Social Media, The Constitution And The Weapons Of Mind Destruction

I tried infinite scrolling, and frankly found it to be boring and a poor use of time. Sure, there were some funny and interesting videos, but they ran dry rather quickly and the allure of the next good one wasn’t strong enough to prevent me from hitting the “x.” But that’s me, and I’m neither a teenager nor a digital native. Others have not been able to pull themselves away, and that’s the root of the problem.

Last week, juries in two different states delivered multimillion-dollar verdicts against Big Tech. A New Mexico jury handed down a $375 million verdict in a case brought by the state’s attorney general against Meta for enabling child sexual exploitation. The next day, a California jury awarded a young woman a combined $6 million in damages from Meta and YouTube for the allegedly addictive and mentally distressing properties of social media apps, including algorithmic curation and so-called infinite scroll, where the app continually provides you with new content as you scroll down the page.

Jonathan Haidt contends that this “addiction” to social media is doing vast harm to young people, and whether or not you agree with him entirely, he makes some very good points. David French, as a father of three, recognizes this, as do many people who cheer the verdicts as finally doing something to address the damage social media has done, and continues to do, to children.

I know that it’s easy to celebrate those verdicts. I’m a parent of three who’s seen what happens when a teenager becomes a “screenager” and buries his or her head in a smartphone, minute by minute, hour after hour. Looking around my community, I’ve seen the disconnection from the real world and the vulnerability to conspiracy theories and absurdly radical social and political movements.

I’m also a concerned citizen who read Jonathan Haidt’s transformative book, “The Anxious Generation,” and watched with alarm as sex, drugs and rock ’n’ roll — the concerns of previous generations of parents — have been replaced by the unholy trinity of anxiety, depression and suicidal ideation.

So does this mean the verdicts against the tech giants are a good thing? Well, yes, in the sense that something needs to be done to stop the damage, and this is something. But then, there’s that nagging problem of the First Amendment.

A social media site isn’t a bottle of alcohol or a cigarette. It’s not delivering a drug. It’s delivering speech. Sometimes that speech is silly and harmless. Sometimes it is toxic and harmful. Sometimes it’s educational or inspiring. But it’s all speech, and in America speech traditionally can only be blocked, censored or regulated in the narrowest of circumstances.

But the problem isn’t the speech, per se, but the mechanisms by which the speech is delivered. Aren’t the algos, the infinite scroll, the weapons social media deploys to keep the impressionable clicking and scrolling well into the night, a different problem? Can’t speech be protected but not the weapons delivering it?

Even the algorithm is a form of constitutionally protected speech. As I’ve explained before, in a 2024 Supreme Court case called Moody v. NetChoice, Justice Elena Kagan wrote for the majority that “expressive activity includes presenting a curated compilation of speech originally created by others.”

The algorithm, Justice Kagan explained, was comparable to the layout of a newspaper, where editors decide which stories to feature prominently, which stories belong on the back pages, and how to make the page attractive and readable so that more people will see the news.

My old buddy, Mike Masnick, who has put a great deal of thought into this issue, has offered a thought experiment to clarify the point.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

This is not to say that fears about social media are a “hoax,” as some people like to claim when they want others to dismiss problems they can’t face, but that nobody would care about the algos or infinite scroll but for the content it delivers, and that makes it about the content. And the content, whether you want to admit it or not, is protected free speech.

But isn’t it worth it, at least this time, to turn a blind eye to the Constitution and save the children?

In the face of genuine social problems, it’s always tempting to cast off constitutional restraints. We fight this battle over crime all the time. Crime waves invariably lead to calls for crackdowns, but there are constitutional and unconstitutional (much less reasonable and unreasonable) ways of fighting crime.

An increased police presence in high-crime areas is invaluable. Race-based stop-and-frisk violates the Constitution and increases political division and public bitterness. Expanded drug treatment facilities can help address the demand for illegal drugs. Brutal prison conditions might punish convicts, but they violate our constitutional commitments to human dignity.

The problem with constitutionally protected rights is that they are constitutionally protected even when we might wish they weren’t. And, indeed, that’s what makes them matter: no one needs the First Amendment to protect speech we like; it exists to protect speech we hate. Does that mean we and our progeny are basically screwed?

There are also constitutional and unconstitutional ways of ameliorating the harms of social media. Phone-free schools, for example, represent a content-neutral time, place and manner restriction that allows students to focus on education, their obvious primary obligation during school hours — not to mention that it helps them socialize face-to-face.

We can also hold social media platforms liable for their own speech in the same way that we can hold any other person or company accountable if they engage in slander, harassment, threats or any other expressive activity that fits the classic categories of unlawful expression.

If these self-help remedies seem to pale in the face of the problem, or strike you as largely naive and ineffective, you are not alone. It’s not that things can’t be done, but that they won’t be done enough to fix what ails us and change the trajectory of social media. Perhaps these verdicts, and the potential for thousands more from children allegedly harmed by social media, will provide the incentive to the Zucks and the Page and Brins of the world to change their evil ways and tweak their weapons of mind destruction to do less harm, even if it means they make less money. The First Amendment means the government can’t make them, but perhaps they will see the virtue of keeping their users alive and sane as a better business model.



5 thoughts on “Social Media, The Constitution And The Weapons Of Mind Destruction”

  1. Nigel Declan

    As has been said more than once in this here hotel, the alternative to bad isn’t necessarily good. It can always get worse. Asserting that an algorithm (or, indirectly, the speech that it curates or promotes) is harmful is easy. Coming up with an alternative is a far, far more challenging task, especially when faced with the question of who is ultimately given the power to decide what sort of algorithmic curation (and, by extension, speech) is deemed legally acceptable and on what basis.

  2. Jeffrey M. Gamso

    “Perhaps these verdicts, and the potential for thousands more from children allegedly harmed by social media, will provide the incentive to the Zucks and the Page and Brins of the world to change their evil ways and tweak their weapons of mind destruction to do less harm, even if it means they make less money.”

    You’re kidding, right? That they’d trade cash for the public good? If you mean that, I’ve got a bridge to Brooklyn you might be interested in buying.

  3. Hunting Guy

    Something needs to be done about the children having access to sexual material.

    I don’t know what, but my 14 yo step-granddaughter was a victim of sex trafficking.

    We caught it before she met anyone, but they were trying to get her to run away and meet them.

    Her mother’s rights have been severed, and we had her as a foster. Because DCS (AZ CPS) had control of her, they mandated that she attend a summer camp in spite of our saying it was inappropriate for her. At the camp she met older girls from a lower socio-economic stratum and got introduced to “boyfriends” on Snapchat.

    She had a phone and met some “boyfriends” on the internet. In order to keep them as friends they encouraged her to send them photos. Those kind of photos.

    As enticement, they would DoorDash or Grubhub burgers and toys for her dog at 2-3 in the morning and have them delivered to the back gate. Needless to say, we were sleeping at the time.

    They also sent her dildos, lube, and skimpy outfits in return for photos of her private parts.

    We found the toys one day while cleaning her room and it all came out. The police were called, DCS threw a fit and she had several meltdowns as we took her phone.

    It didn’t work. She used her friends’ phones at school, stole phones and iPads, and one guy sent her a burner phone.

    The police came again and searched her room but didn’t find a phone. She had a meltdown and kicked a 2-foot hole in the wall. We still haven’t figured out where she could have hidden it.

    We caught her with a phone and tried to take it; she threw a fit, threw a toy at me, and I wound up with my head bleeding. Police and EMS were called again because she was out of control.

    She was sent to a behavioral facility for 30 days. They only treated her for trauma, not the sexual issues.

    After a couple of other stays, she was sent to a group home and nothing is being done for her education or counseling. She is about 4 years behind in reading and math.

    What did we do? We sent several messages to Snapchat with her information and they did nothing. The police sent warrants and found the false info on the men that were grooming her and that was a dead end.

    We were contacted by the FBI and told that one individual had been arrested with her photos on his computer, but he was a collector and not actively grooming anyone.

    So, First Amendment aside, what do we do?

    How do you do age verification? You can say that we needed to control her access, and we did after we found out about the activity, but how do you control her access at school?

    Sorry this is disjointed and long, but the subject hits home to me.

  4. Bryan Burroughs

    The reason no one is watching paint dry at the suggestion of the algorithms is that the algorithms have been tuned to make money. No one wants to watch paint dry, so the algorithm doesn’t suggest it. But the moment people do, it will.

    But you know what the algorithm *does* suggest? Videos likely to cause suicidal ideation in teens and young adults who FB knows for a fact are suicidal. Because their in-house psychologists calculated that those videos are likely to induce such people to watch a few more ads, even if those videos increase the chance that teens might blow their brains out.

    It’s not a fluke that harmful content is being shown to vulnerable people, *it’s explicitly by design*. That makes this fundamentally different than mere content being displayed to random passersby. It’s highly researched, highly targeted, highly specific, and completely intentional.

    There are very real harms here that these companies are well aware of, because they are researching how to get people addicted to the platforms. What you have wrong is the notion that this is a content delivery system. It’s not. It’s a system to deliver people to advertisers by hijacking the reward centers in their brains. The people are the product, the content is the bait.

    Put a different way, let’s take your analogy of the newspaper editors who carefully arranged layouts to get readers. Suppose instead they put high doses of nicotine in the ink, such that people would unwittingly become addicted to the newspaper. Is this a 1st Amendment issue, or a public health issue? Obviously, you wouldn’t ban newspapers as a response, you ban putting nicotine in the ink. These algos are the nicotine in the ink! Surely, if these companies can pay psychologists to make people addicted to their product, they can pay those exact same psychologists to make their product attractive without making it addictive.

    To be blunt, FB and TikTok’s algorithms are to Free Speech what the tobacco companies dumping extra nicotine into their cigarettes was to free enterprise.
