When an argument for criminalizing conduct begins with the appeal to emotion, “We’re fighting for our children,” it’s almost certainly calling for bad law. But when it comes to “deepfake”** nudes of women, particularly minors, does that change the calculus?
The problem with deepfakes isn’t new, but experts say it’s getting worse as the technology to produce them becomes more available and easier to use. Researchers have been sounding the alarm this year on the explosion of AI-generated child sexual abuse material using depictions of real victims or virtual characters. In June, the FBI warned it was continuing to receive reports from victims, both minors and adults, whose photos or videos were used to create explicit content that was shared online.
While some extol the virtues of generative AI, few doubt that it can be used for bad just as easily as for mediocre. Some worry about the end of the human race; parents of young women worry about someone putting the head or face of their child on a naked body for prurient purposes. And, unsurprisingly, they are angry and disturbed by it.
Several states have passed their own laws over the years to try to combat the problem, but they vary in scope. Texas, Minnesota and New York passed legislation this year criminalizing nonconsensual deepfake porn, joining Virginia, Georgia and Hawaii, which already had laws on the books. Some states, like California and Illinois, have only given victims the ability to sue perpetrators for damages in civil court, which New York and Minnesota also allow.
A few other states are considering their own legislation, including New Jersey, where a bill is currently in the works to ban deepfake porn and impose penalties — either jail time, a fine or both — on those who spread it.
It’s one thing to ban deepfake porn and create a civil action for damages, but it’s quite another to criminalize it. At the same time that many call for the reduction or elimination of crimes, others want new crimes to address new wrongs emerging from new technologies. And still others want to see the crimes prosecuted federally, because who doesn’t want to put a sixth-grader into Supermax?
If officials move to prosecute the incident in New Jersey, current state law prohibiting the sexual exploitation of minors might already apply, said Mary Anne Franks, a law professor at George Washington University who leads the Cyber Civil Rights Initiative, an organization aiming to combat online abuses. But those protections don’t extend to adults who might find themselves in a similar scenario, she said.
The best fix, Franks said, would come from a federal law that provides consistent protections nationwide and penalizes dubious organizations profiting from products and apps that make it easy for anyone to create deepfakes. She said it might also send a strong signal to minors who might impulsively create images of other kids.
If the nude images are fake, do they exploit any living person? Is there a reason why the better solution isn’t to shrug, say “it ain’t real,” and walk away? Aside from the sensitivity of young women to sexually related matters, does a fake nude do any real harm? Does it do enough harm to warrant putting a high school classmate in prison or saddling him with a criminal conviction for a sex offense in perpetuity?
And what about the First Amendment implications of such a law? While the details of Mary Anne Franks’ dream crime remain unknown, it’s a certainty that it will run roughshod over the First Amendment given Franks’ loathing of free speech that makes her sad.
There is nothing about nude images, per se, that removes them from First Amendment protection. Why would adding the face of a real person to the image of a nude body change that protection, as icky as it may be to think about what some schoolmate might be doing while eyeing the image? Of course, that didn’t stop President Biden from issuing an executive order banning it.
President Joe Biden signed an executive order in October that, among other things, called for barring the use of generative AI to produce child sexual abuse material or non-consensual “intimate imagery of real individuals.” The order also directs the federal government to issue guidance on labeling and watermarking AI-generated content to help differentiate authentic material from material made by software.
If “deepfake” nudes with the heads of real people were required to carry a watermark, would that be sufficient to fix the problem, to enable the deep shrug instead of outrage? But then, what would be the consequence if someone failed to watermark the image? Are we back to criminalizing it? Is this the way to address the problem? Are there any viable alternatives? And what other protected speech would get swept into a law that would make Franks smile?
*Tuesday Talk rules apply.
**Why “deepfake” rather than just fake?