Tuesday Talk*: Should Deepfake Nudes Be Criminalized?

When an argument for criminalizing conduct begins with the appeal to emotion, “We’re fighting for our children,” it’s almost certainly calling for bad law. But when it comes to “deepfake”** nudes of women, particularly minors, does that change the calculus?

The problem with deepfakes isn’t new, but experts say it’s getting worse as the technology to produce them becomes more available and easier to use. Researchers have been sounding the alarm this year on the explosion of AI-generated child sexual abuse material using depictions of real victims or virtual characters. In June, the FBI warned it was continuing to receive reports from victims, both minors and adults, whose photos or videos were used to create explicit content that was shared online.

While some extol the virtues of generative AI, few doubt that it can just as easily be used for bad as well as mediocre. While some worry about the end of the human race, parents of young women worry about someone putting the head or face of their child on a naked body for prurient purposes. And, unsurprisingly, they are angry and disturbed by it.

Several states have passed their own laws over the years to try to combat the problem, but they vary in scope. Texas, Minnesota and New York passed legislation this year criminalizing nonconsensual deepfake porn, joining Virginia, Georgia and Hawaii, which already had laws on the books. Some states, like California and Illinois, have only given victims the ability to sue perpetrators for damages in civil court, which New York and Minnesota also allow.

A few other states are considering their own legislation, including New Jersey, where a bill is currently in the works to ban deepfake porn and impose penalties — either jail time, a fine or both — on those who spread it.

It’s one thing to ban deepfake porn and give rise to a civil action for damages, but it’s quite another to criminalize it. At the same time that many call for the reduction or elimination of crimes, others want new crimes to address new wrongs that are emerging from new technologies. And still others want to see the crimes prosecuted federally, because who doesn’t want to put a sixth-grader into Super Max?

If officials move to prosecute the incident in New Jersey, current state law prohibiting the sexual exploitation of minors might already apply, said Mary Anne Franks, a law professor at George Washington University who leads the Cyber Civil Rights Initiative, an organization aiming to combat online abuses. But those protections don’t extend to adults who might find themselves in a similar scenario, she said.

The best fix, Franks said, would come from a federal law that can provide consistent protections nationwide and penalize dubious organizations profiting from products and apps that easily allow anyone to make deepfakes. She said that might also send a strong signal to minors who might create images of other kids impulsively.

If the nude images are fake, do they exploit any living person? Is there a reason why the better solution isn’t to shrug and say, “it ain’t real,” and walk away? Aside from the sensitivity of young women to sexually related matters, does a fake nude do any real harm? Does it do enough harm to warrant putting a high school classmate in prison or saddling him with a criminal conviction for a sex offense in perpetuity?

And what about the First Amendment implications of such a law? While the details of Mary Anne Franks’ dream crime remain unknown, it’s a certainty that it will run roughshod over the First Amendment given Franks’ loathing of free speech that makes her sad.

There is nothing about nude images, per se, that removes them from First Amendment protection. Why would adding the face of a real person to the image of a nude body change the protection of the First Amendment, as icky as it may be to think about what some schoolmate might be doing while eyeing the image? Of course, that didn’t stop President Biden from issuing an Executive Order banning it.

President Joe Biden signed an executive order in October that, among other things, called for barring the use of generative AI to produce child sexual abuse material or non-consensual “intimate imagery of real individuals.” The order also directs the federal government to issue guidance to label and watermark AI-generated content to help differentiate between authentic content and material made by software.

If “deepfake” nudes with the heads of real people were required to have a watermark, would that be sufficient to fix the problem, to enable the deep shrug instead of outrage? But then, what would be the consequence if someone failed to watermark the image? Are we back to criminalizing it? Is this the way to address the problem? Are there any viable alternatives? And what other protected speech would get swept into a law that would make Franks smile?

*Tuesday Talk rules apply.

**Why “deepfake” rather than just fake?

11 thoughts on “Tuesday Talk*: Should Deepfake Nudes Be Criminalized?”

  1. Hal

    On a related note, if “deep fake” nude images are criminalized, should aesthetics (i.e., how hot the images are) be considered an aggravating or mitigating factor in determining sentencing?

    Asking for a friend.

  2. Hunting Guy

    Mary Anne Franks.

    “She said that might also send a strong signal to minors who might create images of other kids impulsively.”

    She obviously has no children.

  3. Miles

    I can understand why this would be disturbing to a parent, but from a legal perspective, I don’t see how this differs from an array of “deep [no clue why it’s “deep”] fakes” that involve a face on a Nazi body, or KKK body, etc. There are an infinite number of potential fakes to be created with real faces, many of which would be disturbing.

    While Franks may obsess about women and nudity, why would these fakes be any different or worse than Nazi or Klan fakes?

  4. Ken Hagler

    The name “deepfake” is a reference to an AI technique called deep learning, which is used to generate them. It was coined to distinguish them from fakes done using the old-fashioned technique of an artist drawing or painting a picture.

  5. JB

    It’s an interesting question, and I’ve struggled with squaring the circle on this one.

    What do most normal people want? They want a world where children are not harmed by the prurient interests of others.

    I think most people agree that the most damaging harm is when a pedophile chooses to sexually abuse children (with a very broad definition of abuse).

    The argument against the legality of (real) child porn is three-fold:
    1) It exploits the child (no consent)
    2) The knowledge of the existence of it may cause psychological harm to the child
    3) It may provoke the pedophile into sexually abusing a child

    So let us consider deepfake porn:
    1) Putting the face of a real child on a fake picture is (in my opinion) still exploiting the child, but it’s a much, much weaker form of exploitation than taking real pictures
    2) If the child becomes aware of this, it will probably still cause psychological harm, but probably not quite as much (IMO, most people can recognize that a faked image is not quite as bad as a real one)
    3) It will probably have the exact same provocation for a pedophile. It might even have a larger provocation, assuming the pedophile uses the faces of children he/she knows, rather than strangers

    So, it seems to me there are two scenarios of someone caught with deepfake child porn:
    A) The person has not abused any children in the images.
    B) The person has abused some or all of them.

    Scenario A) is really challenging to me, because the child wasn’t particularly harmed. On the other hand, who knows what might happen with the pedophile in the future? IMO, the pedophile should have a restraining order against contact with the deepfaked children, out of an abundance of caution. I’m not 100% sold on my opinion here, it might be too harsh or too lenient.

    Scenario B) indicates that this person is exceptionally dangerous. Personally, I favor the death penalty for people like this, but I know that’s extreme, and I’m willing to compromise on other mechanisms that ensure that this person will never be in a situation where they can abuse children in the future.

    1. SlimTim

      The ban would likely only have an impact in scenario A. In scenario B the person has abused children and concerns over possessing deepfake child porn are minor in comparison to that. That person can still be heavily punished for the abuse w/o a law banning the possession of deepfake child porn.

  6. LocoYokel

    Deep fake is a term coined with the increasing sophistication of generated imagery. It used to be relatively easy to detect manipulated images, but with the rise of more advanced tools and no ad generation it is becoming much more difficult. Deep fake was coined when manipulated or generated images started to appear that are almost impossible to differentiate from real camera-captured images, even with advanced detection methods.

  7. LocoYokel

    Trying to thread a couple of needles here.

    Fully generated, artificial images, disgusting as the concept seems, are basically no harm, no foul and may even provide a safety valve of sorts. Start putting pictures of real faces on them and I don’t see why some form of existing [Ed. Note: Deleted so as not to make anyone stupider. LY, never talk about law. Never. Ever. Never.]

    Smarter people than me can figure out the details but it seems to me to at least be a place to start that exists within current 1st amendment policies.

    Or maybe I’m just failing to get it again.

  8. j a higginbotham

    But the possibility of deepfake pictures and videos permits plausible deniability when real pictures and videos surface.

    “The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden, not an easy thing to do,”
