Rise of AI Chatbot Sites Featuring Child Sexual Abuse Imagery Sparks Concerns Over Misuse

A chatbot platform offering explicit scenarios involving preteen characters, alongside illegal abuse imagery, has raised significant concerns about the potential misuse of artificial intelligence.

The UK’s child safety watchdog has urged the government to establish safety guidelines for AI companies in light of an increase in technology-generated child sexual abuse material (CSAM).

The Internet Watch Foundation (IWF) said it had been alerted to chatbot sites offering various scenarios, including “child prostitutes in hotels,” “wife engaging in sexual acts with children while on vacation,” and “children and teachers together after school.”

In certain instances, the IWF noted, clicking on a chatbot icon led to a full-screen child sexual abuse image, which then served as the background for the subsequent interaction between the bot and the user.

The IWF discovered 17 AI-generated images that appeared realistic enough to be classified as child sexual abuse material under the UK’s Protection of Children Act.

Users of the sites, which are not being named for safety reasons, were also able to generate additional images resembling the illegal content already accessible.

The IWF, which operates from the UK and has a global remit to monitor child sexual exploitation, said future AI regulation should incorporate child protection guidelines from the outset.

The government has announced plans for AI legislation that is expected to concentrate on the development of cutting-edge models, while the forthcoming crime and policing bill will prohibit the possession and distribution of AI models that produce child sexual abuse material.

“We welcome the UK government’s initiative to combat AI-generated images and videos of child sexual abuse, along with the tools used to create them. While the new criminal offenses covering these issues have not yet come into force, it is critical that this process is expedited,” said Chris Sherwood, chief executive of the NSPCC, as the charity emphasized the need for guidelines.

User-generated chatbots fall under the UK’s online safety regulations, which allow for substantial fines for non-compliance. The IWF said the sexual abuse chatbots had been created both by users and by the site’s developers.

Ofcom, the UK regulator responsible for enforcing the law, remarked, “Combating child sexual exploitation and abuse remains a top priority, and online service providers failing to implement necessary safeguards should be prepared for enforcement actions.”

The IWF reported a staggering 400% rise in AI-generated abuse material reports in the first half of this year compared to the same timeframe last year, attributing this surge to advancements in technology.

While the chatbot content is accessible from the UK, it is hosted on a U.S. server and has been reported to the National Center for Missing and Exploited Children (NCMEC), the U.S. equivalent of the IWF. NCMEC said its CyberTipline report had been forwarded to law enforcement. The IWF added that the site appears to be operated by a company based in China.

The IWF noted that some chatbot scenarios included an 8-year-old girl trapped in an adult’s basement and a preteen homeless girl being invited to a stranger’s home. In these scenarios, the chatbot presented itself as the girl while the user portrayed an adult.

IWF analysts reported accessing explicit chatbots through links in social media ads that directed users to sections containing illegal material. Other areas of the site offered legal chatbots and non-sexual scenarios.

According to the IWF, one chatbot that displayed CSAM images revealed in an interaction that it was designed to mimic preteen behavior. In contrast, other chatbots that did not display CSAM indicated, when questioned by analysts, that they were not restricted or filtered.

The site recorded tens of thousands of visits, including 60,000 in July alone.

A spokesperson for the UK government stated, “UK law is explicit: creating, owning, or distributing images of child sexual abuse, including AI-generated content, is illegal… We recognize that more needs to be done. The government will utilize all available resources to confront this appalling crime.”

Source: www.theguardian.com

Striking Space Images Shortlisted in Astronomy Photography Competition

Encounter Within One Second ©Zhang Yanguang

The International Space Station (ISS) gliding across the sun, spectacular close-ups of comets, and exotic trees beneath circling stars have all been shortlisted for this year’s ZWO Astronomy Photographer of the Year competition.

The image above, by Zhang Yanguang, is titled Encounter Within One Second. It features a series of silhouetted shots of the ISS as it passes directly between Earth and the sun. The spacecraft’s expansive solar panels, collecting energy from that very same star, are vividly visible. The photographer used dual optical filters to isolate specific wavelengths, showcasing the sharp details of the sun’s surface.

Close-up of Comet C/2023 A3 ©Gerald Rhemann and Michael Jäger

The image above showcases a close-up of comet C/2023 A3 (Tsuchinshan–ATLAS), captured by Gerald Rhemann and Michael Jäger in Namibia. The comet displays two distinct tails of dust and gas, which appear nearly to overlap because of the effects of the solar wind.

The last image presented is titled Dragon Tree Trail, taken by Benjamin Barakat in the Firmihin forest on Socotra Island, Yemen. The iconic dragon’s blood tree (Dracaena cinnabari) stands prominently, framed by a backdrop of star trails built from 300 individual exposures.

Dragon Tree Trail ©Benjamin Barakat

This year’s competition saw more than 5,500 submissions from 69 countries. The top entries in nine categories, alongside two special awards and the overall winner, will be announced on September 11th and displayed in an exhibition at London’s National Maritime Museum starting September 12th.


Source: www.newscientist.com

AI tools used to create child abuse imagery targeted in Home Office crackdown

The United Kingdom has become the first country to introduce laws targeting AI tools used to generate child sexual abuse imagery, amid warnings from law enforcement about the alarming growth in the use of this technology.

It is now illegal to possess, create, or distribute AI tools specifically designed to generate sexual abuse materials involving children, addressing a significant legal loophole that has been a major concern for law enforcement and online safety advocates. Violators can face up to five years in prison.

There is also a ban on providing manuals that instruct potential criminals on how to produce abusive images using AI tools. The distribution of such material can result in a prison sentence of up to three years for offenders.

Additionally, a new offense will target websites where criminals share abusive images and grooming advice. Border Force officers will also be granted expanded powers to compel individuals suspected of posing a sexual risk to children to unlock their digital devices for inspection.

The use of AI tools to create child sexual abuse imagery has more than quadrupled in a year: according to the Internet Watch Foundation (IWF), there were 245 reports of AI-generated child sexual abuse images in 2024, compared with just 51 the year before.

These AI tools are being used by perpetrators in various ways to exploit children, such as modifying a real child’s image to appear nude or superimposing a child’s face onto existing abusive images. Victims’ voices are also incorporated into this manipulated material.

The newly generated images are often used to threaten children and coerce them into more abusive situations, including live-streamed abuse. These AI tools also serve to conceal perpetrators’ identities, groom victims, and facilitate further abuse.

Technology secretary Peter Kyle said the UK must stay ahead of the AI revolution. Photo: Wiktor Szymanowicz/Future Publishing/Getty Images

Senior police officials have noted that individuals viewing such AI-generated images are more likely to engage in direct abuse of children, raising fears that the normalization of child sexual abuse may be accelerated by the use of these images.

A new law, part of upcoming crime and policing legislation, is being proposed to address these concerns.

Technology Secretary Peter Kyle emphasized that the country cannot afford to lag behind in addressing the potential misuse of AI technology.

He stated in an Observer article that while the UK aims to be a global leader in AI, the safety of children must take precedence.


Concerns have been raised about the impact of AI-generated content, with calls for stronger regulations to prevent the creation and distribution of harmful images.


Experts are urging enhanced measures to tackle the misuse of AI technology while acknowledging its potential benefits. Derek Ray-Hill, interim chief executive of the IWF, highlighted the need to balance innovation with safeguards against abuse.

Rani Govender, policy manager for child safety online at the NSPCC, emphasized the importance of preventing the creation of harmful AI-generated images in order to protect children from exploitation.

In order to achieve this goal, stringent regulations and thorough risk assessments by tech companies are essential to ensure children’s safety and prevent the proliferation of abusive content.

In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child can call 0808 800 5000. Adult survivors can seek assistance from Napac on 0808 801 0331. In the United States, the Childhelp abuse hotline is 800-422-4453. In Australia, children, parents, and teachers can contact Kids Helpline on 1800 55 1800 or Bravehearts on 1800 272 831, and adult survivors can contact the Blue Knot Foundation on 1300 657 380. Additional resources are available through the Child Helpline International network.

Source: www.theguardian.com

VLT Survey Telescope Captures New Image of the Dark Wolf Nebula

Astronomers using the VLT Survey Telescope (VST) at ESO’s Paranal Observatory in Chile have captured a 283-million-pixel image of the Dark Wolf Nebula.

This image was taken by ESO’s VLT survey telescope and shows the Dark Wolf Nebula. Image credit: ESO / VPHAS+ Team.

The Dark Wolf Nebula is located approximately 5,300 light-years away in the constellation Scorpius.

“Dark nebulae are cold clouds of cosmic dust so dense that they obscure the light of stars and other celestial bodies behind them,” ESO astronomers said in a statement.

“As its name suggests, it does not emit visible light, unlike other nebulae.”

“The dust grains within it absorb visible light and only allow longer wavelength radiation, such as infrared radiation, to pass through.”

“Astronomers study these frozen dust clouds because they often contain new stars that are being born.”

The new image of the Dark Wolf Nebula was captured with the VLT Survey Telescope (VST) in Chile’s Atacama Desert.

“This image occupies an area of the sky equivalent to four full moons, but it is actually part of a much larger nebula called Gum 55,” the astronomers said.

“If you look closely, the wolf could even pass for a werewolf, its hands reaching out to grab unsuspecting passersby.”

“Of course, tracking the ghostly presence of a wolf in the sky is only possible because of its contrast with the bright background.”

“This image shows in stunning detail how the dark wolf stands out among the glowing clouds that form the stars behind it.”

“The colorful clouds are composed primarily of hydrogen gas, which glows with a reddish hue when excited by intense ultraviolet light from newborn stars.”

This image was taken as part of the VST Photometric Hα Survey of the Southern Galactic Plane and Bulge (VPHAS+), which is studying about 500 million objects in the Milky Way.

“Studies like this help scientists better understand the life cycles of stars in our home galaxy,” the researchers said.

Source: www.sci.news