Among the alt-right, the distinction between “platform” and “publisher” has become the rallying cry for imposing liability on internet platforms that toss off the unwoke voices of their favs. How dare these platforms censor the voices opposing social justice and create the impression they don’t exist, that there is no one on the internet challenging progressivism or identity politics, that there is no one to speak for them?!?
If platforms want the benefit of the law, they must not favor one political agenda; they must be fair and balanced in their content. That’s what the law requires, they say with certainty and fervor. Except Section 230 of the Communications Decency Act says no such thing. It never did. The argument is absolutely baseless, no matter how passionately it’s believed. It’s a catchy argument, both because it serves their purpose and because it makes sense, in a self-serving sort of way, but it’s false.
And then there’s the other team.
In a surprising statement, former Vice President and current Democratic presidential candidate Joe Biden announced that, if elected president, he’d seek to repeal one of the most crucial pieces of legislation related to digital platforms, Section 230 of the Communications Decency Act. According to this 1996 provision, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Biden, who has personally clashed with Facebook to have a defamatory political ad removed, said he wants to make social networks more accountable for the content they host, the way newspapers already are.
Hate speech? Russians manipulating the news? Fake news? It’s all there, and the websites that host it cannot be held liable for it because of Section 230. Like the alt-right, Biden says it has to go.
“[The New York Times] can’t write something you know to be false and be exempt from being sued. But [Mark Zuckerberg] can. The idea that it’s a tech company is that Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms,” Biden said in an interview with The New York Times.
Comparing user-generated content on Facebook with articles by NYT-employed journalists suggests that Biden is really bad at analogies, almost Frankian bad, but then, he’s playing to a crowd that isn’t aware enough to get it either, so he may just be selling the lie to pander for votes. He’s a politician, so it comes naturally.
There are occasional comments at SJ. I allow some to post. I trash others. Sometimes I edit a comment. If I post them, and the comment defames someone, I’m not liable. If I don’t post a comment and you’re outraged, tough nuggies. I’m not liable. If I edit a comment, I’m still not liable. But if I defame someone in a post, like the one I’m writing at this very minute, then I’m as liable as anyone else. Much as I’m responsible for content I create, I can allow you to post content on my blawg because of Section 230. Without it, I’m not taking responsibility for you. It’s not that I don’t love you. I do. Just not that much.
A group at the Stigler Center at the University of Chicago Booth School of Business has come up with what they believe to be a way to make platforms more “accountable.” They begin with a bit of puffery by way of an appeal to expertise.
Structural reform of Section 230 is also one of the policy proposals made by the recent Stigler Center Committee on Digital Platforms’ final report. The independent and non-partisan Committee—composed of more than 30 highly-respected academics, policymakers, and experts—spent over a year studying in-depth how digital platforms such as Google and Facebook impact the economy and antitrust laws, data protection, the political system, and the news media industry.
Wow. Experts. Highly-respected academics, policymakers, [their oxford comma] and experts. They must know stuff, even if they neglect to mention the word “law,” which might be relevant since Section 230 is, after all, law. That said, they offer an interesting argument.
We look at Section 230 as a speech subsidy that ought to be conditioned on public interest requirements, at least for the largest intermediaries who benefit most and need it least. It is a speech subsidy not altogether different from the provision of spectrum licenses to broadcasters or rights of way to cable providers or orbital slots to satellite operators.
Is it “not altogether different” from spectrum licenses and rights of way? Anyone and everyone can get a URL, start a website and enjoy the mad rush of people stopping by. There’s no limited spectrum that must be doled out by the government, no taking of private property or use of public property to put up poles and wires, not even control of how much space junk gets its own orbit. Still.
The public and news producers pay for this subsidy. The public foregoes legal recourse against platforms and otherwise sustains the costs of harmful speech. News producers bear the risk of actionable speech, while at the same time losing advertising revenue to the platforms freed of that risk. Media entities have to spend significant resources to avoid legal exposure, including by instituting fact-checking and editing procedures and by defending against lawsuits. These lawsuits can be fatal, as in the case of Gawker Media. More commonly, they face the threat of “death by ten thousand duck-bites” of lawsuits even if those suits are ultimately meritless.
Gawker might not be the best example, as it was the perfect storm of outrageously bad law. But the point that there is a benefit of value, even if intangible, has some merit. Does that, though, constitute a public subsidy, the forbearance of suit so that the platform can be whatever it chooses to be without fear of being Gawkerized? Is that subsidy, which allows a Facebook, a Twitter, a Google, to exist without fear of “death by ten thousand duck-bites,” a justification to impose liability for “the costs of harmful speech”?
Then again, even if Section 230 confers a benefit for which a duty should be imposed, who gets to decide what that harmful speech is, whose voice can be silenced from social media and whose voice must be allowed, what words are so unworthy, what ideas so harmful, that they are the price of the subsidy? I’m pretty sure it won’t be me. It probably won’t be you, either. Would you leave it to Trump? To Biden? To the “experts” at the Stigler Center?
Or maybe the “subsidy” analogy isn’t as apt as it seems, as the same insulation that Section 230 provides for the Twitters is what allows Twitter to exist in the first place. Compel it to police whatever someone deems harmful speech and there’s a good chance there would be no Twitter. Then again, some might well argue that wouldn’t be a bad thing either.