From the sanity perspective, it’s a bizarre decision lacking both sound rationale and respect for precedent. The Fifth Circuit got it wrong.
Yesterday, the US Court of Appeals for the Fifth Circuit upheld Texas’ law banning major social media websites from using most forms of content moderation. The decision is at odds with a recent Eleventh Circuit ruling striking down Florida’s similar law (written by prominent conservative Trump appointee Judge Kevin Newsom). In May, the Supreme Court signaled that at least five justices believe the law to be unconstitutional, when it overturned a previous Fifth Circuit ruling lifting a trial court injunction against implementation of the Texas law. For reasons I summarized here, I agree with the Eleventh Circuit’s approach, and believe the Texas and Florida laws violate the First Amendment’s guarantee of freedom of speech. In this post, I argue that these laws also violate the Takings Clause of the Fifth Amendment.
From a more free-speech-focused perspective, the decision in NetChoice v. Paxton was batshit crazy.
A Texas statute named House Bill 20 generally prohibits large social media platforms from censoring speech based on the viewpoint of its speaker. The platforms urge us to hold that the statute is facially unconstitutional and hence cannot be applied to anyone at any time and under any circumstances.
In urging such sweeping relief, the platforms offer a rather odd inversion of the First Amendment. That Amendment, of course, protects every person’s right to “the freedom of speech.” But the platforms argue that buried somewhere in the person’s enumerated right to free speech lies a corporation’s unenumerated right to muzzle speech.
Judge Andrew Oldham, former counsel to Gov. Abbott and a Trump appointee, opens with a bit of snark about “a corporation’s unenumerated right to muzzle speech.” Not that he would reverse Citizens United, but muzzling speech, to Judge Oldham, is entirely different from speaking, because reasons. But some wag may ask a question about Judge Oldham’s swipe against muzzlers: aren’t they private businesses, such that the First Amendment isn’t applicable?
The implications of the platforms’ argument are staggering. On the platforms’ view, email providers, mobile phone companies, and banks could cancel the accounts of anyone who sends an email, makes a phone call, or spends money in support of a disfavored political party, candidate, or business. What’s worse, the platforms argue that a business can acquire a dominant market position by holding itself out as open to everyone—as Twitter did in championing itself as “the free speech wing of the free speech party.” Then, having cemented itself as the monopolist of “the modern public square,” Packingham v. North Carolina (2017), Twitter unapologetically argues that it could turn around and ban all pro-LGBT speech for no other reason than its employees want to pick on members of that community, Oral Arg. at 22:39–22:52.
Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say. Because the district court held otherwise, we reverse its injunction and remand for further proceedings.
Private businesses don’t “censor” so much as decline to host. They can’t, and don’t, stop people from saying any damn thing they want to say. They just make them do so elsewhere, which is really what the problem is, and why concurring Judge Edith Jones tried to tie her view to PruneYard and FAIR.
Functioning as conduits for both makers and recipients of speech, the platforms’ businesses are closer analytically to the holdings of the Supreme Court in PruneYard and FAIR than to Miami Herald, Pacific Gas & Electric, and Hurley. It follows from the first two cases that in arbitrarily excluding from their platforms the makers of speech and preventing disfavored speech from reaching potential audiences (“censoring,” in the comprehensive statutory term), they are not themselves “speaking” for First Amendment purposes.
The notion here is that when private physical property, a mall in PruneYard, replaces public property as the “public square,” such that the owners of private property get to decide whom to allow on so that their voices are heard, they assume the trappings of government and become subject to the First Amendment’s prohibition against censorship.
In particular, it is ludicrous to assert, as NetChoice does, that in forbidding the covered platforms from exercising viewpoint-based “censorship,” the platforms’ “own speech” is curtailed. But for their advertising such “censorship”—or for the censored parties’ voicing their suspicions about such actions—no one would know about the goals of their algorithmic magic. It is hard to construe as “speech” what the speaker never says, or when it acts so vaguely as to be incomprehensible. Further, the platforms bestride a nearly unlimited digital world in which they have more than enough opportunity to express their views in many ways other than “censorship.” The Texas statute regulates none of their verbal “speech.” What the statute does, as Judge Oldham carefully explains, is ensure that a multiplicity of voices will contend for audience attention on these platforms. That is a pro-speech, not anti-free speech result.
By forcing platforms to host speech they choose not to host, whether it’s pedo Nazis or transgender racism hunters, these private entities are being compelled to speak, not because they are personally saying something but because their interface is being forced to leave unmolested a lengthy explanation of the virtues of QAnon. Hundreds of them, perhaps.
The irony is that the argument was made to the court that this will not merely preclude them from “censoring” speech such as “Trump really won the election with space aliens,” but also speech from people who argue that the law prohibiting man/boy love is wrong.
The Platforms do not directly engage with any of these concerns. Instead, their primary contention—beginning on page 1 of their brief and repeated throughout and at oral argument—is that we should declare HB 20 facially invalid because it prohibits the Platforms from censoring “pro-Nazi speech, terrorist propaganda, [and] Holocaust denial[s].” Red Br. at 1. Far from justifying pre-enforcement facial invalidation, the Platforms’ obsession with terrorists and Nazis proves the opposite. The Supreme Court has instructed that “[i]n determining whether a law is facially invalid,” we should avoid “speculat[ing] about ‘hypothetical’ or ‘imaginary’ cases.” Wash. State Grange, 552 U.S. at 449–50. Overbreadth doctrine has a “tendency . . . to summon forth an endless stream of fanciful hypotheticals,” and this case is no exception. United States v. Williams, 553 U.S. 285, 301 (2008). But it’s improper to exercise the Article III judicial power based on “hypothetical cases thus imagined.”
The reason such “fanciful hypotheticals” are raised is to present the point as clearly, if sometimes a bit hyperbolically, as possible that bad shit will happen if a bad law is allowed. Judge Oldham wasn’t buying, there being no “case and controversy” before the court about these make-believe “terrorists and Nazis” and he won’t suffer such flights of fantasy before the law is even enforced. Because, you know, there are no such things as “terrorists and Nazis” who post crap on the interwebs, but Trump had a big boat parade that only lost one boat so how could he even lose?
I’m disinclined to presume that any judge appointed by Trump is a bad judge, or that any judge not appointed by Trump is good (or better). But this is a mind-numbingly dumb decision, and Edith Jones should be ashamed of herself.
As an interested software developer and legal layman, it seems like such a strange ruling, one that conflicts first of all with §230 even before it gets to the First Amendment.
(But as a legal naif who has been schooled many times by Law Twitter on how PruneYard is dead, well, I admit to a bit of fondness at seeing that zombie come back to life.)
Back to my software dev hat: I do think the thing to do in an Internet Century is to somehow recognize that an individual’s free speech rights must allow them to host a website (and implement payment transactions on that website).
So I wouldn’t force a social media company to host a user’s speech, nor would I force a hosting company to host a person’s site, but I would force DNS service providers, some tier of Internet security companies (like Cloudflare’s DDoS protection), as well as some form of rudimentary payment processor, to have viewpoint-neutral policies with respect to their customers.
If a KiwiFarms wants to host its site out of its living room, it can do so provided it can get an IP address, be protected from massive attacks, and conduct e-commerce across it; and if people dislike that, they can get a court to take the site down, not exact punishment via the backroom conversations of modern-day robber barons.
However, I would look at ways to use consumer protection laws to make the twitters, googles, et al. respond to customers appealing various punishments and bans in a timely, open fashion.
Having been around when PruneYard was decided, the significance of the decision relied on the unusual circumstances of the times. Malls became the place to go, and they were being built everywhere and everyone was going to malls to shop. At the time, it appeared as if the game was over and malls won, and they would remain the primary place where people gathered in perpetuity. So the Supreme Court fudged the rationale because, well, if this was how it was going to be forever, then it had to be addressed. The public square that the founding fathers were protecting with the First Amendment was now in private hands. It had to be fixed.
Of course, we now know that malls weren’t the end of the road, and yet the same thing is happening again with Twitter and Facebook.
What the court misses is that Twitter is not the public square. The Internet is not “the” but rather “a” public square, and maybe not even that. If one wants to hang out on the Twitter part of a public square, that’s your choice. If one wants to hang out at the Parler or Truth Social part of a public square, that’s your choice. And if one wants to hang out at the local mall, that’s your choice.
And is there any evidence that “email providers, mobile phone companies, and banks [do] cancel the accounts of anyone who sends an email, makes a phone call, or spends money in support of a disfavored political party, candidate, or business”? This seems hypothetical to me, and weird in a ruling that claims that what actually does happen on social media is the hypothetical.
The Fifth Circuit used bad facts and willful misreadings of the law to reach the conclusion they wanted.
Payment processors stopped processing porn sites even though the sites were engaging in 1a protected speech. I believe they also stopped processing Alex Jones and other members of the internet-cancelled set. It’s why some sites were driven to crypto.
Certainly not all porn, though. I hadn’t heard that Alex Jones and his ilk had been dropped. But those cases go far beyond “sends an email” or “makes a phone call” at least. One of the porn sites was accused of distributing child porn, wasn’t it? And was Alex Jones dropped before or after the events of January 6? Few businesses want to go anywhere near even serious accusations of violations of the law.
I’ve seen no evidence that “an email” or “a phone call” with 1a protected content will cost you your email provider or phone service. The court makes it sound like we are all in danger if we donate money to a “disfavored political party”. It’s hyperbole from the court at the very least.
Even so, it’s odd to argue that social media sites can’t moderate as they see fit because other businesses do things that are perfectly legal.
I would actually say that Pornhub and OnlyFans are pretty big examples of, at the very least, banks (or in this case credit card processors) either threatening to cut off or temporarily cutting off dealings with them over allegations of child pornography and other sexual exploitation, leveled against them by groups like the one formerly known as “Morality in Media,” since renamed the “National Center on Sexual Exploitation,” which is much more focused on anti-pornography causes generally, as well as things like sex ed.
I apologize that I can’t find a source for this one, so feel free to take it with a grain of salt, but I have also seen comments that, according to groups that actually focus on removing child pornography from the internet, Pornhub is actually much more responsive and has less of it than sites like Facebook and Twitter.
If pornography doesn’t count as a disfavored business, and Visa and Mastercard don’t count as banks, then I’ve got nothing currently off the top of my head.
I suppose I should mention, to address the other half of that in case someone thinks I agree with everything said in that decision, that calling “pro-Nazi speech, terrorist propaganda, [and] Holocaust denial[s]” a hypothetical is, umm, a very interesting worldview, to put it kindly (gertruding complete).
Was this a post about the merits of payment processors? No. No it was not.
Just because there’s a mention doesn’t mean you have to dive down that rabbit hole. Save it for another day
I understand that your analysis of the decision is correct, but given that the social media companies were originally doing it in cahoots with one of the political parties and are now doing it under (arguably unconstitutional) pressure from the White House I’m having a problem generating much sympathy for them.
Sympathy for the devil?
Isn’t this decision in accordance with public accommodation laws? If a social media platform can ban someone because it disagrees with their expressed beliefs, why can’t a hotel do so too? Conversely, if a hotel has to accommodate a well-known Nazi, why not Twitter too?
Social media certainly seems to be acting as a modern day town square, but it seems to me that the answer to these questions lies in how you view their role.
No. Not at all.