The Supreme Court has held that child pornography is an exception to the First Amendment, and few would argue otherwise. But it also limited the exception to actual kiddie porn, not fake computer-generated porn where no child was sexually abused.
The court held, 6 to 3, that the Child Pornography Prevention Act is overly broad and unconstitutional, despite its supporters’ arguments that computer-generated smut depicting children could stimulate pedophiles to molest youngsters.
“The sexual abuse of a child is a most serious crime and an act repugnant to the moral instincts of a decent people,” Justice Anthony M. Kennedy wrote in the majority decision. Nevertheless, he said, if the 1996 law were allowed to stand, the Constitution’s First Amendment right to free speech would be “turned upside down.”
Even then, however, C.J. Rehnquist realized that technology would soon come up with new and worse ways to create virtual porn where real children were put at risk.
Chief Justice William H. Rehnquist wrote the dissent. “Congress has a compelling interest in ensuring the ability to enforce prohibitions of actual child pornography, and we should defer to its findings that rapidly advancing technology soon will make it all but impossible to do so,” he wrote.
And while the Court held that two sections of the Child Pornography Prevention Act of 1996 were unconstitutional, a third section was not challenged and remains in force. It bans some computer alterations of innocent pictures of children; grafting a child’s school picture onto a naked body, for example.
With the now ubiquitous availability of AI, we’re not only there, but inundated with AI-generated fakes that place the face of a real child atop naked, and often sexual, images. Because of the ease with which these images can be generated, the problem has become overwhelming.
The images are indistinguishable from real ones, experts say, making it tougher to identify an actual victim from a fake one. “The investigations are way more challenging,” said Lt. Robin Richards, the commander of the Los Angeles Police Department’s Internet Crimes Against Children task force. “It takes time to investigate, and then once we are knee-deep in the investigation, it’s A.I., and then what do we do with this going forward?”
Law enforcement agencies, understaffed and underfunded, have already struggled to keep pace as rapid advances in technology have allowed child sexual abuse imagery to flourish at a startling rate. Images and videos, enabled by smartphone cameras, the dark web, social media and messaging applications, ricochet across the internet.
Investigation, not to mention prosecution, is not only difficult because of the ease of creation and sheer volume of images involved, but because technology has put up significant, perhaps insurmountable, hurdles.
The use of artificial intelligence has complicated other aspects of tracking child sex abuse. Typically, known material is assigned a string of numbers derived from its contents, a digital fingerprint, which is used to detect and remove illicit content. If the known images and videos are modified, even slightly, the fingerprint no longer matches and the material appears new.
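For illustration, a minimal sketch of why fingerprint matching is so brittle. This uses a plain cryptographic hash; real clearinghouse systems use perceptual hashes such as Microsoft’s PhotoDNA, which survive minor edits somewhat better, but the underlying problem is the same: change the content and the fingerprint no longer matches.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Content-derived fingerprint: a hash computed from the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

original = b"...raw bytes of a known image..."  # stand-in for a real file
modified = original + b"\x00"                   # a one-byte alteration

print(fingerprint(original))
print(fingerprint(modified))
# The two digests share nothing in common. The modified file no longer
# matches the fingerprint on record, so hash-matching alone cannot flag
# it as known material.
```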
United States IP addresses are hard enough to trace, and the fingerprints change with every modification of an image, but much of the problem derives from outside the United States, beyond the reach of our law enforcement in any event.
Adding to those challenges is the fact that while the law requires tech companies to report illegal material if it is discovered, it does not require them to actively seek it out.
And therein lies the rub: if laws and their enforcers can’t stop the creators of AI-generated child porn, they can reach the transmitters of these images.
While more than 90 percent of CSAM [child sexual abuse material] reported to NCMEC is uploaded in countries outside the United States, the vast majority of it is found on, and reported by, U.S.-based online platforms, including Meta’s Facebook and Instagram, Google, Snapchat, Discord and TikTok.
And the leading “experts” dealing with this problem are not known for their concern for the “cultish” First Amendment.
Wednesday’s Senate hearing will test whether lawmakers can turn bipartisan agreement that CSAM is a problem into meaningful legislation, said Mary Anne Franks, professor at George Washington University Law School and president of the Cyber Civil Rights Initiative.
“No one is really out there advocating for the First Amendment rights of sexual predators,” she said. The difficulty lies in crafting laws that would compel tech companies to more proactively police their platforms without chilling a much wider range of legal online expression.
The implications for vague and overbroad laws are one thing. Should that adorable picture of little Timmy playing with his rubber ducky in the bathtub, the one you sent to Aunt Sadie, land you in federal prison for half a decade? Should tween Timmy be sent to reform school for putting Hannah Montana’s face on Taylor Swift’s naked body? But co-opting internet enterprises as the pornography police, upon pain of prosecution or liability, presents a huge incentive for Meta, etc., to shut down anything that remotely seems wrong to their algos. And when they do, to whom do you complain?
Much like Franks’ last jihad, revenge porn, which meshes unsurprisingly well with her latest foray into internet censorship, free speech takes a distant back seat to the fears and harms generated by AI fake child porn. As she correctly notes, crafting laws that don’t violate the First Amendment will be difficult, if not impossible. But which will Congress give away, the First Amendment or AI-generated fake child porn?

There needs to be legislation addressing computer-generated child porn. But I’m nowhere near savvy enough to know how to write it, and I have serious doubts that anyone in Washington does either.
They can probably still regulate it to the extent it meets the Miller test for “obscenity.” At least, to the extent that whole concept retains any vitality. The thing is, we are well down the road of allowing every old “morals” type offense that arguably “isn’t hurting anybody.” To the extent images for the “child attracted” set can be generated entirely by AI, we should anticipate tacking another letter onto the “LGBTQ” acronym.
A lot of photographers and camera manufacturers want to distinguish their images from AI-generated images. Likewise, many publishers of images (e.g., news companies) share the concern, as they want to have accurate representations of real events. This is strong motivation to come up with standards for metadata that can be embedded in images and authenticated, proving an image was produced with a camera.

Of course, a child pornographer could strip such metadata from an image. Any consumer of child pornography who requires such metadata to be present in order to enjoy the image goes to the top of my prosecution list, so that’s not entirely a bad thing. If Franks’ goal is a general war on freedom of speech, then I would advocate for the camera user’s identity to be part of the standard metadata. After all, most photographs are taken with cellphones, which know a lot about the identity of their owners and operators. I think there might be good commercial reasons for that policy as well.

Can Congress pass laws regarding such metadata without directly implicating the First Amendment? Can they pass laws against hacking such metadata to circumvent its authenticity guarantees, so that the AI pornbots can’t produce images that claim to come from a camera produced by manufacturer X?
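For the curious, a minimal sketch of the kind of signed-metadata scheme the comment describes, assuming a camera-held Ed25519 private key and a manufacturer-published public key. The key handling, claim format, and device ID here are hypothetical simplifications; real efforts along these lines, such as the C2PA content-provenance standard, are considerably more elaborate.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical setup: the camera holds a private signing key;
# the manufacturer publishes the matching public key.
camera_key = Ed25519PrivateKey.generate()
manufacturer_public_key = camera_key.public_key()

def sign_capture(image_bytes: bytes, device_id: str):
    """Bind a hash of the pixels to a provenance claim and sign it."""
    claim = json.dumps({
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "device": device_id,
    }).encode()
    return claim, camera_key.sign(claim)

def verify_capture(image_bytes: bytes, claim: bytes, signature: bytes) -> bool:
    """Check the signature, then check the claim covers these exact pixels."""
    try:
        manufacturer_public_key.verify(signature, claim)
    except InvalidSignature:
        return False  # metadata was forged or tampered with
    expected = hashlib.sha256(image_bytes).hexdigest()
    return json.loads(claim)["sha256"] == expected

photo = b"...sensor data..."  # stand-in for a real capture
claim, sig = sign_capture(photo, device_id="cam-0001")
print(verify_capture(photo, claim, sig))            # True: provenance verifies
print(verify_capture(photo + b"\x00", claim, sig))  # False: pixels were altered
```

As the comment notes, a scheme like this proves provenance only where the metadata is present and intact; stripping it leaves an image that simply makes no claims, which is exactly why mandating or hacking such metadata raises the legal questions above.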
Laws can be passed to do anything that the naive legislator believes is a “good” idea. Whether they will achieve anything is another matter.
Current encryption and signing technologies such as RSA depend on the difficulty of factoring very large numbers, but there are those who believe that quantum computing is possible. If it eventuates and can indeed factor large numbers, current means of certifying documents and encrypting information are dead.
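To make the commenter’s point concrete, a toy sketch of RSA with deliberately tiny primes (real keys use primes hundreds of digits long): whoever can factor the public modulus can reconstruct the private key, which is precisely the computation a large quantum computer running Shor’s algorithm would make feasible. Requires Python 3.8+ for the modular-inverse form of pow.

```python
# Toy RSA, showing why the scheme stands or falls on the
# difficulty of factoring n.
p, q = 61, 53               # the secret prime factors
n = p * q                   # public modulus: 3233
e = 17                      # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)         # private exponent; computable only from p and q

msg = 42
cipher = pow(msg, e, n)     # anyone can encrypt with the public pair (n, e)
print(pow(cipher, d, n))    # 42: the private-key holder decrypts

# An attacker who factors n recovers p and q, recomputes phi and d,
# and reads everything. Shor's algorithm would perform that factoring
# efficiently on a large enough quantum computer.
cracked_d = pow(e, -1, (61 - 1) * (53 - 1))  # from factoring n = 3233
print(pow(cipher, cracked_d, n))             # 42 again: the key is broken
```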
In the United States, we have a Constitution that limits what laws can be passed.