If you are going to contact the police about someone who has shared child sexual abuse material (CSAM) with you, it is probably not the best idea to have the same material on your own device, or to consent to a search that lets law enforcement gather additional information. But that is exactly what one Alaskan man did, and it led to his arrest.
404 Media reported earlier this week on the man, Anthaney O'Connor, who ended up being arrested after a police search of his devices allegedly revealed AI-generated child sexual abuse material (CSAM).
From 404 Media:
According to newly filed charging documents, Anthaney O'Connor reached out to law enforcement in August to alert them to an unidentified pilot who had shared child sexual abuse material (CSAM) with O'Connor. While investigating the crime, and with O'Connor's consent, federal authorities searched his phone for additional information. A review of the electronics revealed that O'Connor allegedly offered to create virtual CSAM for the pilot, according to the criminal complaint.
According to police, the unidentified pilot shared images of children taken in a grocery store with O'Connor, and the two discussed how they could place the minors into an explicit virtual world.
Law enforcement says it found at least six explicit AI-generated CSAM images on O'Connor's device, which he said he had intentionally downloaded, along with several “real” images that were unintentionally mixed in. A search of O'Connor's home turned up a computer as well as several hard drives hidden in a ventilation duct of the house; a review of the computer allegedly revealed a 41-second video of child rape.
In an interview with officials, O'Connor said he regularly reported CSAM to his internet service provider “but still received sexual gratification from the images and videos.” It is not clear why he decided to report the pilot to law enforcement. Perhaps he felt a sense of guilt, or perhaps he genuinely believed his AI-generated CSAM did not break the law.
AI image generators are typically trained on real photographs, which means that AI-generated “pictures” of children are fundamentally based on real images; there is no way to fully separate the two. In that sense, AI-generated CSAM is not a victimless crime.
The first such arrest of an individual for possession of AI-generated CSAM came only recently, back in May, when the FBI arrested a man for using Stable Diffusion to create “thousands of realistic images of prepubescent minors.”
Proponents of AI will say that it has always been possible to create explicit images of minors using Photoshop, but AI tools make it exponentially easier for anyone to do so. A recent report found that one in six members of the House of Representatives have been targeted by AI-generated deepfake porn. Many products have guardrails to protect against the worst uses, much as printers refuse to photocopy currency. Putting up such barriers can at least prevent some of this behavior.