Back in March, Cliff Schecter’s left-leaning political podcast, “Blue Amp,” was inexplicably demonetized by YouTube, meaning he’d no longer earn ad revenue from his content.
After several attempts to contact YouTube, he eventually learned he was being punished for spreading election misinformation.
The only problem?
He wasn’t.
The video that got Schecter into trouble featured one of his guests, political consultant Lauren Windsor, merely discussing false election claims in order to debunk them.
“The algorithm didn’t tell the difference between terms used to spread election conspiracies and those debunking them,” says Schecter.
Even after YouTube reviewed his case, it took another week before his channel was reinstated.
It’s not just politics running into trouble with YouTube.
Bob, who runs a popular health and nutrition channel with just over 3.5 million subscribers, noticed that several of his videos extolling the benefits of keto and fasting were demonetized or removed from YouTube’s search results, with no explanation given.
In the hierarchy of contemporary controversial science ideas — where vaccines and climate change still hold the top spots — it’s hard to imagine the keto diet causing an uproar.
Bob, who declined to share his identity for fear of further silencing by YouTube, still isn’t sure what happened. “As a large content creator that roots his content in evidence, this is very frustrating to say the least,” he says. “The scientific method is being threatened.”
Accusations of censorship on YouTube are nothing new.
It’s been happening since 2017, when the platform began demonetizing videos and channels to make the site more “brand safe” for advertisers.
But restricting Nazi channels and other hate speech is one thing.
The more that YouTube has tried to iron out the kinks in its moderation process (since March, it has eased restrictions on swearing in videos), the more nebulous and concerning that process has become.
And whether it’s podcaster Bret Weinstein getting demonetized in 2021 for promoting alternative COVID treatments (YouTube’s message, according to Weinstein, was “Drop the science and stick to the narrative—or else”), or recent automatic bans for sharing too much footage from the Israel-Hamas war or the war in Ukraine, YouTube’s guidelines are fuzzier than ever.
YouTube’s list of content guidelines might seem straightforward at first, but the devil is in the details.
Who exactly determines what’s “shocking” or “controversial” or even “sensitive”?
A YouTube spokesperson summed up the company’s user policy to The Post like so: “YouTube is built on the premise of openness, which has led to an incredible array of diverse voices and perspectives across the platform — but none of this would be possible without our commitment to protect our community from harmful content.”
Restricting harmful content makes sense when it means taking down videos that feature life-threatening behavior or demonstrate how to break the law.
But unpopular opinions, perfectly legal activities, and even unorthodox health advice don’t necessarily fall under that same category.
The more that YouTube defends its practices, the more it runs into obvious contradictions.
“Our policies are content based, not speaker based,” a YouTube spokesperson told The Post, confusingly. “Our advertiser-friendly guidelines don’t look at the creator, but rather the content of the videos themselves.”
That would be news to Russell Brand, who lost the right to monetize his 6.6-million-subscriber YouTube channel not because of any of his videos, but because he’s been accused of (though not yet charged with) sexual assault.
In other words, they looked at the creator, not the content.
In YouTube’s defense, the task of moderating user content, with an estimated 700,000 hours of video uploaded to the site every day by its 37 million channels, is no small feat.
During the early days of the pandemic, YouTube warned creators that they were understaffed and would be pulling down more videos than usual, including many that didn’t actually violate any policies.
Even today, the platform uses “a combination of machine learning technology and human review,” scanning for words or phrases that could be problematic, the YouTube rep said.
The problem, of course, is that artificial intelligence is terrible at identifying context, nuance, and intention. It’s like employing an Alexa virtual assistant to babysit your kids.
Schecter, whose channel has managed to avoid further disciplinary action from YouTube, says he’s lost faith in the platform.
And if YouTube’s muddled moderation efforts continue unchecked, he’s unlikely to be the only one.
“The problem with YouTube is there is never anybody home,” he said. “It’s like every movie about the fear of a future with no humans to reach and AI controlling everything.”
This story originally appeared on the New York Post.