David French makes an emotional appeal to hold TikTok liable for the asphyxiation death of a 10-year-old girl, Nylah Anderson, who attempted the so-called "blackout challenge." It is, without a doubt, tragic and horrible, as the facts make plain.
In 2021, a 10-year-old girl named Nylah Anderson was viewing videos on TikTok, as millions of people do every day, when the app’s algorithm served up a video of the so-called blackout challenge on its “For You Page.” The page suggests videos for users to watch. The blackout challenge encourages users to record themselves as they engage in self-asphyxiation, sometimes to the point of unconsciousness. Nylah saw the challenge, tried it herself and died. She accidentally hanged herself.
Nylah’s parents sued, and the Third Circuit held that the suit could proceed. At Techdirt, Mike Masnick explains where the circuit went terribly wrong in its decision.
We’ve already hit (and not for the last time) the key problem with the Third Circuit’s analysis. “Given … that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms,” the court declared, “it follows that doing so amounts to first-party speech under [Section] 230, too.” No, it does not. Assuming complete overlap between First Amendment protection and Section 230 protection is a basic mistake.
Section 230(c)(1) says that a website shall not be “treated as the publisher” of most third-party content it hosts and spreads. Under the ordinary meaning of the word, a “publisher” prepares information for distribution and disseminates it to the public. Under Section 230, therefore, a website is protected from liability for posting, removing, arranging, and otherwise organizing third-party content. In other words, Section 230 protects a website as it fulfills a publisher’s traditional role. And one of Section 230’s stated purposes is to “promote the continued development of the Internet”—so the statute plainly envisions the protection of new, technology-driven publishing tools as well.
But David focuses not on the scope of Section 230’s protection as a whole, but on the need to prevent tragedy.
But does TikTok have any responsibility? After all, it not only hosted the video. According to the claims in the legal complaint Nylah’s mother filed, TikTok’s algorithm repeatedly put dangerous challenges on Nylah’s For You Page. To continue with the offline analogy, imagine if an adult walked up to Nylah after school and said, “I know you, and I know you’ll like this video,” and then showed her a blackout challenge performed by somebody else.
In that circumstance, wouldn’t we hold the adult who presented the video to Nylah even more responsible than the person who actually made the video? The very fact that the recommendation came from an adult may well make Nylah more susceptible to the video’s message.
Algos aren’t people, no less adults, making adult decisions for children. Algos don’t think. They don’t feel. They don’t care. They are merely a set of coded instructions to serve up other content that, based upon what a user watches, would likely be of interest to that user. Granted, their purpose can be nefarious, to keep a user using, and using, and using, but that’s not the complaint.
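To make that point concrete, the "set of coded instructions" can be sketched in a few lines. This is not TikTok's actual system, which is proprietary and vastly more elaborate; it is a minimal, hypothetical illustration of a recommender that scores candidate videos by how much their tags overlap with what a user has already watched. All of the names and tags below are invented for the example.

```python
from collections import Counter

# Hypothetical, minimal sketch of a content-based recommender. Not TikTok's
# algorithm; just an illustration of "coded instructions" that surface content
# similar to what a user has already watched.

def build_interest_profile(watch_history):
    """Count how often each tag appears in the videos a user has watched."""
    profile = Counter()
    for video in watch_history:
        profile.update(video["tags"])
    return profile

def recommend(candidates, watch_history, k=3):
    """Rank candidate videos by tag overlap with the user's watch history."""
    profile = build_interest_profile(watch_history)
    scored = [
        (sum(profile[tag] for tag in video["tags"]), video)
        for video in candidates
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [video for score, video in scored[:k] if score > 0]

if __name__ == "__main__":
    history = [
        {"id": 1, "tags": {"dance", "music"}},
        {"id": 2, "tags": {"challenge", "stunt"}},
    ]
    pool = [
        {"id": 3, "tags": {"cooking"}},
        {"id": 4, "tags": {"challenge", "dance"}},
        {"id": 5, "tags": {"music", "dance"}},
    ]
    print(recommend(pool, history))
```

Nothing in that sketch thinks, feels, or cares; it simply counts and sorts.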
In the offline world, the adult who presented the video to Nylah could well be liable for wrongful death, and no amount of objections that he just showed the child a video made by someone else would save him from liability. After all, he approached the child of his own volition and offered her the video unsolicited. That was the adult’s own speech, and adults are responsible for what they say.
In the offline world, an adult would have an appreciation of why certain content is inappropriate, even dangerous, for a particular user, such as a 10-year-old girl. If an adult pushed a dangerous TikTok on a child, the adult would certainly be responsible for the conduct. But algos make no determination of appropriateness. How could code make such a subjective determination? It’s just code.
As David notes, the Supreme Court’s 2024 Moody decision includes dicta suggesting that algorithms are expressive speech protected by the First Amendment.
But with legal rights come legal responsibilities. The First Amendment doesn’t permit anyone to say anything they’d like. If I slander someone, I can be held liable. If I traffic in child sex abuse material, I can be put in jail. If I harass someone, I can face legal penalties. Should the same rules apply to social media companies’ speech, including to their algorithms?
The Third Circuit said yes. One Obama appointee and two Trump appointees held that TikTok could be held potentially liable for promoting the blackout challenge, unsolicited, on Nylah’s page. It couldn’t be held liable for merely hosting blackout challenge content — that’s clearly protected by Section 230 — nor could it be held liable for providing blackout challenge content in response to a specific search.
It’s understandable that David employs analogies to construct an argument for liability for a heartbreaking tragedy, as there is no legal argument that could sustain the position. Pointing to the traditional and extremely limited exceptions to the First Amendment is a common gambit for those who seek to extend the parameters of prohibited speech to new realms and new technologies, because not doing so can result in terrible consequences. “Do it for the children” is, perhaps, the most effective emotional appeal around, and has served as the basis for many limitations on freedom lest a single child be harmed.
Could the internet survive without algos? Sure, even though they alert users to content of interest they might otherwise never find. But that does not make algos into sentient adults with the capacity to make relative judgments as to what will cause any individual harm. It’s horrific that Nylah Anderson killed herself over something as utterly idiotic as the “blackout challenge.” The maker of that TikTok bears responsibility. Nylah’s parents bear responsibility. The algo, however, is not to blame for the mechanical act of giving a user what the user wants.
Come on now! You can’t have parents being responsible for monitoring their children’s activities and media consumption. That’d be responsible parenting and we can’t have that. People might start learning to not want the gubberment in every aspect of their lives.
In a way, the Internet is just intensifying the competition for Darwin awards. It presents the (sometimes inherently dangerous) conduct for emulation, but it is still the viewer who decides whether that is a good idea. There are no cliffs here in the flats, but when I was young, my parents nevertheless took the precaution of instructing me as to the pitfalls of jumping off cliffs just to emulate others. It seems to me this could be adapted to modern technology, with the Tik Tok challenges taking the place of the cliff-jumping in the lesson.
Even “in the order received” is an algorithm.
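The commenter's point can be shown literally: a strictly chronological feed is still the output of an algorithm, just a trivially simple one. A hypothetical sketch, with invented field names:

```python
# Hypothetical illustration: even "in the order received" is an algorithm,
# namely a sorting rule applied to posts by their arrival timestamp.
def chronological_feed(posts):
    return sorted(posts, key=lambda post: post["received_at"])

posts = [
    {"id": "b", "received_at": 1700000200},
    {"id": "a", "received_at": 1700000100},
]
print([p["id"] for p in chronological_feed(posts)])  # ['a', 'b']
```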
Could Facebook work without a sophisticated algorithm? No. They get so many posts in a day that without an effective algorithm, you would be lucky to see one post of interest in your lifetime.
Oh, you sweet summer child. As someone in the software industry, I can assure you these apps are far more than “just code.” I can also assure you that making such a subjective determination is, from a mere technical perspective, quite easy. Programmers do this shit every day, albeit with wildly different problem sets and scales. We are also trained to treat user input differently from system inputs, and the same ideas should hold here from a legal perspective. User requests for searches are wildly different from individualized feeds such as the For You Page, which suggested the video in question.
TikTok categorizes all content on their platforms to the n-th degree. Any specific video or post might have dozens or even hundreds of categories assigned to it. This is unquestionably protected by Section 230. The idea being that it makes it easy for a user to say “give me cat videos where a dog gets kicked in the face and thrown into a snowbank,” and I’ll be damned if you don’t wind up with at least 8 hours of man’s best friend getting pummeled in Houghton, Michigan. Surely, a good indexing system helps drive engagement, as the ability to find what you want undoubtedly will keep you on the app as well as get you to come back. It’s why Google succeeded so damned well. Section 230 plainly protects curating content, and for good reason.
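In miniature, the kind of tag-based indexing and search the comment describes might look something like the sketch below. The data model, tags, and function names are invented for illustration; this is not TikTok's actual schema, only a toy version of responding to an explicit user query.

```python
# Hypothetical sketch of tag-based indexing and query-driven search.
# The tags and data model are invented for illustration only.
from collections import defaultdict

def build_index(videos):
    """Map each tag to the set of video ids carrying that tag."""
    index = defaultdict(set)
    for video in videos:
        for tag in video["tags"]:
            index[tag].add(video["id"])
    return index

def search(index, query_tags):
    """Return ids of videos matching every tag the user asked for."""
    results = [index.get(tag, set()) for tag in query_tags]
    return set.intersection(*results) if results else set()

videos = [
    {"id": 1, "tags": {"cat", "snow", "dog"}},
    {"id": 2, "tags": {"cat", "kitchen"}},
]
index = build_index(videos)
print(search(index, {"cat", "snow"}))  # {1}
```

The distinction the commenter draws is between this kind of user-initiated lookup and a feed the system assembles on its own.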
However, TikTok and systems like it categorize content AND users, and they employ armies of psychologists to help them do it, as well as figure out how to match the two together. They also use these psychologists to help them make their content feeds individually addictive and to manipulate users into viewing more content so they can sell advertising. That’s the game. It’s similar to the psychological tactics used to keep 70-year-olds dumping their children’s inheritance into slot machines. This isn’t “just code.” It’s hijacking people’s brains via a custom Skinner Box.
These systems aren’t the way they are by happenstance. Users don’t stumble onto anything by accident. Content isn’t provided randomly or equally to all users. This shit is intentional. So, when you say TikTok had no idea who this user was, what this video was, or how appropriate it was for her, you couldn’t be more wrong. They absolutely had the video categorized as a “challenge” video, knowing full well what those things are. It almost certainly included “asphyxiation.” It probably included “danger,” “stunt,” and “peer pressure.” They also knew damned well that the user was a 10-year-old girl, probably what city she lived in, and very possibly what school she attended. They knew her friend groups and probably which of them she liked the most. And while no one at TikTok ever explicitly said “show asphyxiation videos to 10-year-old girls at UmptyScrunch Elementary School in Whoville,” there was absolutely a meeting where a group of folks decided that it would be OK to target “challenge” videos at preteens. There were almost certainly discussions around whether this audience might mimic what they saw. And psychologists who absolutely should have fucking known better assisted in this decision-making process and failed (or refused) to prevent it.
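As to the earlier claim that the "subjective determination" is technically easy: once both videos and users carry category labels, excluding flagged categories from a minor's personalized feed is a one-line condition. The tags, ages, and rules below are invented purely to illustrate how simple such a gate would be; they do not describe TikTok's actual systems or policies.

```python
# Hypothetical illustration: once content and users are both categorized,
# gating risky categories away from minors is a trivial check.
# The tags, ages, and rules here are invented for illustration only.

RISKY_TAGS = {"asphyxiation", "self-harm", "dangerous-stunt"}

def eligible_for_user(video, user):
    """Return True if a video may appear on this user's personalized feed."""
    if user["age"] < 18 and RISKY_TAGS & video["tags"]:
        return False
    return True

def personalized_feed(candidates, user):
    return [v for v in candidates if eligible_for_user(v, user)]

user = {"id": "u123", "age": 10}
candidates = [
    {"id": 1, "tags": {"challenge", "asphyxiation", "stunt"}},
    {"id": 2, "tags": {"dance", "music"}},
]
print([v["id"] for v in personalized_feed(candidates, user)])  # [2]
```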
TikTok shouldn’t be held accountable for hosting a tasteless video that some 10yo girl randomly found on her own and then imitated to her own peril, as that is protected by Section 230. But it should be held accountable for systematically designing an addictive product which intentionally and manipulatively puts this kind of video in front of this kind of audience with absolutely no concern for the obvious consequences that might result.