Child safety experts warn that criminals are using artificial intelligence to generate sexually explicit images of children at an alarming rate, outpacing law enforcement's capacity to rescue real victims.
Prosecutors and child safety organizations are concerned that AI-generated images are so realistic that it is becoming difficult to distinguish them from records of actual abuse. Such images can flood the dark web and mainstream internet quickly, with a single AI model capable of producing thousands of new images.
According to Christina Korobov of the Zero Abuse Project, offenders are now placing the faces of children who were never abused into abuse imagery, making it harder to identify victims and prosecute offenders.
Law enforcement already struggles to investigate the tens of millions of reports of actual child sexual abuse material (CSAM) shared online each year, a problem expected to worsen as AI-generated content proliferates.
A Justice Department prosecutor warns that crimes against children are already under-resourced, and that the rise of AI-generated content will lead to an explosion of cases that are harder to prosecute.
Criminals are using AI in various ways, from generating child abuse images from text prompts to altering existing files into explicit content. Without updated legislation, prosecuting offenders for possessing AI-generated CSAM remains challenging.
Source: www.theguardian.com