Under a new UK law, tech companies and child protection agencies will be granted the authority to test whether artificial intelligence tools can create images of child abuse.
The announcement follows reports from a safety watchdog of a sharp rise in AI-generated child sexual abuse material, with cases surging from 199 in 2024 to 426 in 2025.
With these changes, the government will empower selected AI firms and child safety organizations to examine AI models, including the technology behind chatbots such as ChatGPT and video generators such as Google’s Veo 3, to ensure safeguards are in place to prevent the creation of child sexual abuse images.
Kanishka Narayan, the Minister of State for AI and Online Safety, emphasized that this initiative is “ultimately to deter abuse before it happens,” stating, “Experts can now identify risks in AI models sooner, under stringent conditions.”
The change is being made because creating and possessing CSAM is illegal, which until now has prevented AI developers and others from producing such images even for testing purposes. Previously, authorities could act only after AI-generated CSAM had been uploaded online; the new law aims to tackle the problem at its source by stopping the images from being generated at all.
The amendments are part of the Crime and Policing Bill, which also bans the possession, creation, and distribution of AI models designed to generate child sexual abuse material.
During a recent visit to Childline’s London headquarters, Narayan listened to a simulated call depicting a teenager seeking help after being blackmailed with an AI-generated sexual deepfake of herself.
“Hearing about children receiving online threats provokes intense anger in me, and parents feel justified in their outrage,” he remarked.
The Internet Watch Foundation, which monitors CSAM online, reported that incidents of AI-generated abusive content have more than doubled this year. Reports of Category A material, the most severe category of abuse, rose from 2,621 images or videos to 3,086 over the same period.
Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025, while depictions of newborns to two-year-olds rose sharply, from five in 2024 to 92 in 2025.
Kerry Smith, chief executive of the Internet Watch Foundation, said these legal changes could be “a crucial step in ensuring the safety of AI products before their launch.”
“AI tools enable survivors to be victimized again with just a few clicks, allowing criminals to create an unlimited supply of sophisticated, photorealistic child sexual abuse material,” she noted. “Such material commodifies the suffering of victims and increases risks for children, particularly girls, both online and offline.”
Childline also shared insights from counseling sessions in which AI was mentioned. Concerns included children using AI to assess weight, body image, and appearance; chatbots discouraging children from confiding in trusted adults about abuse; online harassment with AI-generated content; and blackmail involving AI-created images.
From April to September this year, Childline reported 367 counseling sessions where AI, chatbots, and related topics were mentioned, a fourfold increase compared to the same period last year. Half of these references in the 2025 sessions pertained to mental health and wellness, including the use of chatbots for support and AI therapy applications.
Source: www.theguardian.com