At Techdirt, Mike Masnick has received a flurry of press releases in response to YouTube’s decision not to moderate election misinformation.
Judging by the number of very angry press releases that landed in my inbox this past Friday, you’d think that YouTube had decided to personally burn down democracy. You see, that day the company announced an update to its approach to moderating election misinformation, effectively saying that it would no longer try to police most such misinformation regarding the legitimacy of the 2020 election:
We first instituted a provision of our elections misinformation policy focused on the integrity of past US Presidential elections in December 2020, once the states’ safe harbor date for certification had passed. Two years, tens of thousands of video removals, and one election cycle later, we recognized it was time to reevaluate the effects of this policy in today’s changed landscape. In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm. With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections. This goes into effect today, Friday, June 2. As with any update to our policies, we carefully deliberated this change.
There was once a time when distinguishing between truth and lies was the personal responsibility of each individual, but that was back in the old days when the notion of personal responsibility didn’t cause people to get PTSD. To be fair, many have been intentionally misled by misinformation and have willingly consumed it, convinced that they were taking personal responsibility for seeking out the truth even as they swallowed the most outlandish and baseless lies, such as the claim that the election was stolen from Trump.
Is YouTube’s demurring from the role of misinformation arbiter a reflection of its conclusion that it’s not misinformation? It’s not that they’re saying misinformation isn’t happening, at least when it comes to “hard” information about voting, or that they will let anything go.
All of our election misinformation policies remain in place, including those that disallow content aiming to mislead voters about the time, place, means, or eligibility requirements for voting; false claims that could materially discourage voting, including those disputing the validity of voting by mail; and content that encourages others to interfere with democratic processes.
Why, then, has YouTube decided to take a hands-off approach to some misinfo but not other kinds? Masnick offers four possible reasons for YouTube’s new position.
- Realizing the moderation had gone too far. Basically, a version of what the company was saying publicly. They realized that trying to enforce a ban against 2020 election misinfo was, in fact, catching too much legitimate debate. While many are dismissing this, it seems like a very real possibility. Remember, content moderation at scale is impossible to do well, and it frequently involves mistakes. And it seems likely that mistakes are even more likely to occur with video, where more legitimate political discourse gets mistaken for disinformation and removed. This could include things like legitimate discussions on the problems of electronic voting machines, or questions about building up more resilient election systems, which could be accidentally flagged as disinfo.
- Realizing that removing false claims wasn’t making a difference. This is something of a corollary to the first item, and is hinted at in the statement above. Unfortunately, this remains a very under-studied area of content moderation (there are some studies, but much more research is needed): how effective bans and removals are at stopping the spread of malicious disinformation. As we’ve discussed in a somewhat different context, it’s really unclear that online disinformation is actually as powerful as some make it out to be. And if removing that information is not having much of an impact, then it may not be worth the overall effort.
- The world has moved on. To me, this seems like the most likely actual reason. Most folks in the US have basically decided to believe what they believe. That could be that (as all of the actual evidence shows) the 2020 election was perfectly fair and Joe Biden was the rightful winner, or that (as no actual evidence supports) the whole thing was “rigged” and Trump should have won. No one’s changing their mind at this point, and no YouTube video is going to convince people one way or the other. And, at this point, this particular issue is so far in the rearview mirror that the cost of continuing to monitor for this bit of misinfo just isn’t worth it for the lack of any benefit or movement in people’s beliefs.
- YouTube is worried about a Republican government in 2025. This is the cynical take. Since 2020 election denialism is now a key plank of the GOP platform, the company may be deciding to “play nice” with the disinformation-peddling part of the GOP (which has moved from the fringe to the mainstream), having decided that this is a more defensible position for inevitable hearings/bad legislation/etc.
As Mike says, their motivation is likely a combination of these factors rather than any one alone, but contrary to the press releases in his inbox, he takes the position that this isn’t the end of the world as we know it.
But it does strike me that the out-and-out freakout among some, claiming that this proves the world is ending, may not be accurate. I’m all for companies deciding they don’t want to host certain content because they don’t want to be associated with it, but we’re still learning whether or not bans are the most effective tool in dealing with blatant misinformation and disinformation, and it’s quite possible that leaving certain claims alone is actually a reasonable policy in some cases.
Putting aside the difficulties, whether at scale or just individually, of determining what is, in fact, misinformation, does the act of removing content elevate the perception of its validity in the minds of those who tend to believe any absurd conspiracy that favors their tribe? While it is hard to draw the line separating the potentially false from the potentially true, there are some assertions so obviously false that the only thing they accomplish is making the nutjobs even nuttier and, possibly, more dangerous by believing everything is an existential threat worth dying, or killing, for. Sadly, there are some who buy into this crap and do terrible harm, all the while believing that they’re doing the right thing and taking personal responsibility.
It is now caveat empty, as in a bunch of empty vessels unable or unwilling to adhere to caveat emptor. Mon dieu.
>There was once a time when distinguishing between truth and lies was the personal responsibility of each individual, but that was back in the old days when the notion of personal responsibility didn’t cause people to get PTSD.
This “freakout” would be a lot easier to take seriously if these same people had a consistent approach to misinformation.
The decision was most likely driven by economics and market share. Google isn’t in business to be the guardian of democracy. Any press release on this decision is probably produced by a PR firm.
Fact checking seemed like a good idea until Trump and Covid came along. I have a strong biology background and understand genetics. It was clear to me early on that Covid probably accidentally escaped from the Wuhan lab, and later that vaccines are life-saving. It was hard to find a news source that combined these two commonsense views.
I have conversations with my wife where I state something as a fact. If she acts surprised, I question myself and wonder whether my source was accurate. Did I get my information from the media, or was it a reputable source?
Ironically, the only sources I trust to provide facts are opinion blogs like Simple Justice. The media has beclowned itself too much to be useful without extensive confirmation.
I agree, and think that Mike Masnick is largely ignoring the economic aspect. While Google may lose some viewers or advertisers for explicitly stating its intention to stay out of the fray now, that may be preferable to being accused (fairly or not) of trying to put its thumb on the scale once Election Season starts in earnest, and losing even more.
Google and YouTube, doubtless to the chagrin of many, are not in the business of giving tummy rubs to people because they hold the right beliefs or say the right things. They are in the business of making money. Taking actions that risk alienating a significant chunk of your customer base is bad for business. Some businesses have forgotten this, and we have seen recently how devastating that has been for their bottom lines.
Also, it looks like many of these commentators are moving their resources to Twitter where they are less likely to be banned, throttled, or demonetized. And it seems like Twitter’s audience is more likely to pay a subscription fee.
I don’t know if the author forgot to mention this or deliberately omitted it due to Techdirt’s animosity toward Twitter.