The quantity of AI-generated videos depicting child sexual abuse online has surged as pedophiles exploit advances in the technology.
According to the Internet Watch Foundation, AI-generated abuse videos have crossed a threshold at which they are nearly indistinguishable from real footage, with a sharp increase observed this year.
In the first half of 2025, the UK-based internet safety watchdog verified 1,286 AI-generated videos containing illegal child sexual abuse material (CSAM), a steep rise from just two during the same period last year.
The IWF reported that over 1,000 of these videos fall under Category A abuse, the most severe classification of such material.
The organization said that billions of dollars of investment in AI have resulted in widely available video-generation models, which pedophiles are exploiting to produce abusive content.
“It’s a highly competitive industry with substantial financial incentives, unfortunately giving perpetrators numerous options,” stated an IWF analyst.
The surge in videos is part of a 400% rise in URLs associated with AI-generated child sexual abuse content in the first half of 2025, with the IWF receiving reports of 210 such URLs, compared with 42 in the same period last year.
The IWF found one post on a dark web forum in which a user noted the rapid improvements in AI and how quickly pedophiles had adapted to using an AI tool to “better interact with new developments.”
IWF analysts said the videos appear to have been made by taking free, basic AI models and “fine-tuning” them with CSAM to produce realistic footage. In some cases, this fine-tuning required only a handful of CSAM videos, according to the IWF.
The most lifelike AI-generated abuse videos encountered this year were based on real victims, the watchdog reported.
Derek Ray-Hill, interim chief executive of the IWF, said the rapid advancement of AI models, their broad availability, and their adaptability for criminal purposes could lead to a massive proliferation of AI-generated CSAM online.
He warned there is a real risk that AI-generated CSAM could flood the clear web, and cautioned that the spread of such content could encourage criminal activity such as child trafficking and modern slavery.
Recreating existing victims of sexual abuse in AI-generated imagery allows pedophiles to significantly increase the volume of CSAM online without having to exploit new victims, he added.
The UK government is intensifying efforts to combat AI-generated CSAM by criminalizing the possession, creation, or distribution of AI tools designed to produce abusive content. Those found guilty under the new law may face up to five years in prison.
It is also now illegal to possess manuals that instruct potential offenders in using AI tools to create abusive imagery or to abuse children, punishable by up to three years in prison.
In a February announcement, Home Secretary Yvette Cooper stated, “It is crucial to address child sexual abuse online, not just offline.”
AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalizes the taking, distribution, and possession of “indecent photographs or pseudo-photographs” of children.
Source: www.theguardian.com
