AI-generated child porn would be illegal under GOP proposal

AI-generated images of child exploitation would be considered child pornography under a proposal before Arizona lawmakers. 

The legislation from Rep. Julie Willoughby, R-Chandler, adds “any computer generated image” to the state’s definition of child exploitation material. Violations of that law are a class 2 felony, with sentencing enhancements if the child is under 15 years of age.

Artificial intelligence image generators have become a fast-growing and controversial industry, as users have been quick to deploy the technology to create pornographic images of real people. Celebrities like Taylor Swift have been the victims of computer-generated images depicting them in graphic scenarios.


“What we are observing is that this technology is being misused by bad actors,” Dr. Rebecca Portnoff, Head of Data Science at Thorn, told the Arizona Mirror. Thorn is an international non-profit organization aimed at stopping child sex trafficking and child exploitation. 

Portnoff said that researchers are finding that bad actors are using free and open-source image generators to create more extreme content. In some cases, AI image generators were found to have been trained on images from real child sexual abuse material, commonly referred to as CSAM.

Portnoff has been monitoring this “small but growing community” since 2022 and has found that AI-generated content has not begun to flood the internet — yet. Portnoff and her colleagues have been engaging with AI developers and are currently looking at using machine learning as a possible way to mitigate this type of use.

“I both feel encouraged to see that kind of interest and I recognize that the pace of this type of technological development is very fast,” Portnoff said, adding that they need to be “running to keep pace” with the growing issue. 

But is Willoughby’s House Bill 2138 constitutional?

In 2002, the U.S. Supreme Court struck down two provisions of the 1996 Child Pornography Prevention Act that prohibited computer-generated images that “appear to be” of children. The court found the language too broad, noting it could encompass works such as depictions of Romeo and Juliet.

However, child pornography is defined as obscene, meaning that it has no First Amendment protection, according to ASU professor and free speech expert Joseph Russomanno. 

“The Court struck down the law as violating the First Amendment because these [Computer Generated Images] do not meet the legal definitions of obscenity or child pornography, in large part because they are not images of real people who were being exploited,” Russomanno said. “However, the contemporary sophistication of computer-generated images, enhanced by AI, is arguably a game-changer.” 

Images that AI tools create from known photos of children and from existing CSAM wouldn’t have First Amendment protections, Russomanno said.

“If these images are no longer generic, but are virtual reproductions of actual people, then it’s arguable that exploitation and harm have occurred,” Russomanno said. “A law prohibiting such images could conceivably withstand a constitutional challenge.”

The people using AI to create child pornography are moving fast. Researchers have found communities dedicated to creating such content, selling databases and images to help train AI models to produce more CSAM.

A report by the Internet Watch Foundation found more than 20,000 AI-generated images on one dark web CSAM forum during a one-month period. Of those images, at least 11,000 were deemed to “most likely be criminal,” meaning that they included children or appeared to use real child pornography as the basis for their creation.

The images have been causing issues for researchers and investigators alike, who told the Washington Post that they are making victim identification harder and confusing existing programs that scan the web for CSAM.

It also isn’t just online adult predators creating this content, according to Portnoff. There have been some instances in which children have used AI image generators to create pornographic content of their peers, she said.

“It is not too late,” Portnoff said. “We can course correct here.”

Emily Slifer, director of policy for Thorn, said that regulators are often “playing catch up” on emerging technologies and there is an onus for companies to build in safety protocols to try to prevent this type of content from being created. 

Slifer said that they encourage companies to take a “safety by design” approach to their programs. The core philosophy of safety by design is to put user safety first and foremost when making a product. 

Portnoff echoed these sentiments, adding that the ease of use of most AI image software means that developers need to be responsible for how the technology is used.

Prominent AI image-generating platforms, including Dream Studio, Getty Images and MidJourney, did not respond to questions about the legislation and what they are doing to mitigate this sort of use on their platforms.

Willoughby, the bill’s sponsor, said she was unable to comment on the legislation. The bill has been assigned to the House Judiciary Committee, but it is not clear if the panel will consider the legislation.
