Legal fight against AI-generated child pornography is complicated – a legal scholar explains why, and how the law could catch up

The city of Lancaster, Pennsylvania, was shaken by revelations in December 2023 that two local teenage boys had shared hundreds of nude images of girls in their community in a private chat on the social media platform Discord. Witnesses said the photos could easily have been mistaken for real ones, but they were fake. The boys had used an artificial intelligence tool to superimpose real photos of girls’ faces onto sexually explicit images.

With troves of real photos available on social media platforms, and AI tools becoming more accessible across the web, similar incidents have played out across the country, from California to Texas and Wisconsin. A recent survey by the Center for Democracy and Technology, a Washington, D.C.-based nonprofit, found that 15% of students and 11% of teachers knew of at least one deepfake that depicted someone associated with their school in a sexually explicit or intimate manner.

The Supreme Court has implicitly concluded that computer-generated pornographic images based on images of real children are illegal. The use of generative AI technologies to make deepfake pornographic images of minors almost certainly falls under the scope of that ruling. As a legal scholar who studies the intersection of constitutional law and emerging technologies, I see a new challenge to the status quo: AI-generated images that are fully fake but indistinguishable from real photos.

Policing child sexual abuse material

While the internet’s architecture has always made it difficult to control what is shared online, there are a few kinds of content that most regulatory authorities across the globe agree should be censored. Child pornography is at the top of that list.

For decades, law enforcement agencies have worked with major tech companies to identify and remove this kind of material from the web, and to prosecute those who create or circulate it. But the advent of generative artificial intelligence and easy-to-access tools like the ones used in the Pennsylvania case present a vexing new challenge for such efforts.

In the legal field, child pornography is generally referred to as child sexual abuse material, or CSAM, because the term better reflects the abuse that is depicted in the images and videos and the resulting trauma to the children involved. In 1982, the Supreme Court ruled that child pornography is not protected under the First Amendment because safeguarding the physical and psychological well-being of a minor is a compelling government interest that justifies laws that prohibit child sexual abuse material.

That case, New York v. Ferber, effectively allowed the federal government and all 50 states to criminalize traditional child sexual abuse material. But a subsequent case, Ashcroft v. Free Speech Coalition from 2002, might complicate efforts to criminalize AI-generated child sexual abuse material. In that case, the court struck down a law that prohibited computer-generated child pornography, reasoning that such material, because it is produced without any real children, is protected speech under the First Amendment.
