Child safety experts have warned that criminals on the dark web are increasingly using artificial intelligence to create sexually explicit images of children, with a particular focus on “star” victims. These predators fixate on children whose abuse imagery is considered popular within the predator community, using AI to generate new images from existing child sexual abuse material (CSAM).
Organizations that monitor predator activity on dark web forums have noted a rise both in discussions of AI-generated images and in the fixation on specific child victims. The accessibility of AI tools allows predators to manipulate old images and fabricate new ones, raising concerns about the impact on survivors and the threats to their lives and reputations.
Victims of child sexual abuse are particularly vulnerable to this misuse of AI, as their images can be manipulated without their consent, perpetuating their trauma and endangering their privacy and safety.
The anonymity provided by dark web browsers makes it difficult for child safety groups to track or remove these images, underscoring the need for legislation to prevent the creation and dissemination of AI-generated child sexual abuse material.
Megan, a survivor of child sexual abuse whose imagery circulates as CSAM, expressed fears that AI could be used to further exploit her image, and described the psychological toll such manipulation could take on her wellbeing.
Efforts to combat the proliferation of AI-generated child sexual abuse material are hampered by the encrypted nature of communication on the dark web, which makes enforcement difficult. Advocates are calling for regulatory action beyond criminalization to address the issue, while acknowledging the complex hurdles to preventing the creation and distribution of such harmful content.
Source: www.theguardian.com