A convicted sex offender who created more than 1,000 indecent images of children has been banned from using any “AI creation tools” for the next five years, in what is believed to be the first order of its kind.
Anthony Dover, 48, was ordered by a British court not to use artificial intelligence tools without prior police authorization, as a condition of a sexual harm prevention order imposed in February.
The prohibition covers tools such as text-to-image generators, which produce realistic-looking photos from written prompts, as well as websites used to generate explicit “deepfake” content.
Mr. Dover, who received a community order and a £200 fine, was specifically directed not to use the Stable Diffusion software, which is known to have been exploited by pedophiles to create lifelike child sexual abuse material.
The case is one of a series of recent prosecutions in which AI-generated images have featured, prompting warnings from charities about the proliferation of AI-generated sexual abuse imagery.
Last week, the government announced a new offense making it illegal to produce sexually explicit deepfakes of people over 18 without their consent, with offenders facing prosecution and fines.
Making or possessing synthetic child sexual abuse material has been illegal since laws introduced in the 1990s, and recent prosecutions have involved lifelike images produced with tools such as Photoshop. Such tools are increasingly being used to create sophisticated synthetic abuse content, as recent court cases involving the distribution of these images demonstrate.
The Internet Watch Foundation (IWF) has emphasized the urgent need to address the production of AI-generated child sexual abuse images, warning about the rapid rise of such content and its chilling realism.
Law enforcement agencies and charities are working to tackle this trend, with growing concern about the impact of deepfake content on victims and calls for technology companies to prevent the creation and distribution of such harmful material.
The decision to restrict an adult sex offender’s use of AI tools may pave the way for closer monitoring of those convicted of indecent image offenses, reflecting a shift toward proactive measures against future violations.
While restrictions on internet use are a common condition for sex offenders, restrictions specifically targeting AI tools have been rare, underscoring the significance of this case for future legal actions.
Stability AI, the company behind Stable Diffusion, says it has taken steps to prevent abuse of its software, emphasizing responsible use of the technology and compliance with legal guidelines.
Source: www.theguardian.com