Home Office targets artificial intelligence tools used to create child sexual abuse imagery

The United Kingdom is set to become the first country to outlaw AI tools designed to create child sexual abuse material, under new laws announced by the Home Office.

Under the new offence, it will be illegal to possess, create or distribute AI tools specifically designed to generate child sexual abuse material, closing a legal loophole that has long concerned law enforcement and online safety advocates. Offenders will face up to five years in prison.

There will also be a ban on manuals that instruct potential offenders on how to produce abusive images using AI tools. Distributing such material will carry a prison sentence of up to three years.

Additionally, a new law is being introduced to prevent the sharing of abusive images and advice among criminals or on illicit websites. Border Force officers will be granted expanded powers to compel individuals suspected of posing a sexual risk to children to unlock their digital devices and submit them for inspection.

The use of AI tools in creating images of child sexual abuse has increased significantly, with a reported four-fold increase over the previous year. According to the Internet Watch Foundation (IWF), there were 245 instances of AI-generated child sexual abuse images in 2024, compared to just 51 the year before.

These AI tools are being used in various ways by perpetrators seeking to exploit children, such as modifying a real child’s image to appear nude or superimposing a child’s face onto existing abusive images. The voices of real children and victims are also used in some of this material.

The newly generated images are often used to threaten children and coerce them into more abusive situations, including live-streamed abuse. These AI tools also serve to conceal perpetrators’ identities, groom victims, and facilitate further abuse.

Technology secretary Peter Kyle has said the UK must stay ahead of the AI revolution. Photograph: Wiktor Szymanowicz/Future Publishing/Getty Images

Senior police officials have noted that individuals viewing such AI-generated images are more likely to engage in direct abuse of children, raising fears that the normalization of child sexual abuse may be accelerated by the use of these images.

A new law, part of upcoming crime and policing legislation, is being proposed to address these concerns.

Technology Secretary Peter Kyle emphasized that the country cannot afford to lag behind in addressing the potential misuse of AI technology.

He stated in an Observer article that while the UK aims to be a global leader in AI, the safety of children must take precedence.


Concerns have been raised about the impact of AI-generated content, with calls for stronger regulations to prevent the creation and distribution of harmful images.


Experts are urging enhanced measures to tackle the misuse of AI technology while acknowledging its potential benefits. Derek Ray-Hill, interim chief executive of the IWF, highlighted the need to balance innovation with safeguards against abuse.

Rani Govender, policy manager for child safety online at the NSPCC, emphasized the importance of preventing the creation of harmful AI-generated images in order to protect children from exploitation.

In order to achieve this goal, stringent regulations and thorough risk assessments by tech companies are essential to ensure children’s safety and prevent the proliferation of abusive content.

In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child can call 0808 800 5000. Adult survivors can seek assistance from Napac on 0808 801 0331. In the United States, call or text the Childhelp abuse hotline on 800-422-4453. In Australia, children, parents and teachers can contact Kids Helpline on 1800 55 1800 or Bravehearts on 1800 272 831, and adult survivors can contact the Blue Knot Foundation on 1300 657 380. Additional resources are available through the Child Helpline International network.

Source: www.theguardian.com

Astronomy techniques employed by scientists to uncover deepfakes

According to a team of astronomers from the University of Hull, spotting a deepfake is as simple as looking for stars in the eyes. They propose that AI-generated fakes can be identified by examining human eyes in a similar manner to studying photos of galaxies. This means that if the reflections in a person’s eye match, then the image is likely of a real human. If not, it is likely a deepfake.



In this image, the person on the left (Scarlett Johansson) is real and the one on the right is generated by AI. Below each face are enlarged views of their eyeballs. The reflections in the eyeballs match for the real person but are inaccurate (from a physical standpoint) for the fake one. Image credit: Adejumoke Owolabi / CC BY 4.0.

“The eye reflections match up for real people but are incorrect (from a physics standpoint) for fake people,” said Prof Kevin Pimbblet, from the University of Hull.

Professor Pimbblet and his colleagues analysed the light reflections of the human eye in real and AI-generated images.

They then quantified the reflections using a method commonly used in astronomy to check for consistency between the reflections in the left and right eyes.

In fake images, the reflections in both eyes are often inconsistent, while in real images the reflections in both eyes are usually the same.

“To measure the shape of a galaxy we analyse whether it has a compact centre, whether it has symmetry and how smooth it is – we analyse the distribution of light,” Professor Pimbblet said.

“We automatically detect the reflections and run their morphological features through the CAS (concentration, asymmetry, smoothness) parameters and the Gini coefficient to compare the similarity between the left and right eyeballs.”

“Our findings show that deepfakes have some differences between the pair of reflections.”

The Gini coefficient is typically used to measure how light in an image of a galaxy is distributed from pixel to pixel.

This measurement is done by ordering the pixels that make up an image of a galaxy in order of increasing flux, and comparing the result with what would be expected from a perfectly uniform flux distribution.

A Gini value of 0 is a galaxy whose light is evenly distributed across all pixels in the image, and a Gini value of 1 is a galaxy whose light is all concentrated in one pixel.
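To make that ordering-and-comparison step concrete, here is a minimal Python sketch that computes a Gini-style coefficient over a patch of pixel fluxes and compares the values for two eye reflections. The random patches, their sizes and the bare difference printed at the end are illustrative assumptions, not the researchers’ actual pipeline.

import numpy as np

def gini(pixel_fluxes):
    """Gini coefficient of a set of pixel fluxes (Lotz et al. 2004 form).

    0 -> flux spread evenly over every pixel
    1 -> all flux concentrated in a single pixel
    """
    x = np.sort(np.abs(np.ravel(pixel_fluxes)))  # order pixels by increasing flux
    n = x.size
    if n < 2 or x.mean() == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))

# Hypothetical cropped reflection patches for the left and right eye of one face;
# a real pipeline would detect and crop these from the photograph.
left_eye = np.random.rand(32, 32)
right_eye = np.random.rand(32, 32)

# A large gap between the two Gini values hints at physically inconsistent
# reflections, i.e. a possible deepfake.
print(abs(gini(left_eye) - gini(right_eye)))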

The astronomers also tested the CAS parameters, tools originally developed by astronomers to measure the distribution of a galaxy’s light and determine its morphology, but found they were not a useful predictor of fake eyes.
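For context, the asymmetry term of CAS is commonly computed in galaxy-morphology work roughly as in the sketch below: rotate the patch by 180 degrees and measure how much it differs from itself. This is the generic textbook form applied to a hypothetical reflection patch, not necessarily the exact implementation the team used.

import numpy as np

def asymmetry(patch):
    """Rotational asymmetry (the 'A' in CAS): compare a patch with its
    180-degree rotation, normalised by the total flux in the patch."""
    rotated = np.rot90(patch, 2)
    total = np.sum(np.abs(patch))
    return 0.0 if total == 0 else np.sum(np.abs(patch - rotated)) / (2 * total)

# Example on a hypothetical 32x32 reflection patch.
print(asymmetry(np.random.rand(32, 32)))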

“It’s important to note that this is not a silver bullet for detecting fake images,” Professor Pimbblet said.

“There are false positives and false negatives, and it doesn’t detect everything.”

“But this method provides a foundation, a plan of attack, in the arms race to detect deepfakes.”

The researchers presented their work on July 15 at the Royal Astronomical Society National Astronomy Meeting 2024 (NAM 2024), held at the University of Hull.

_____

Kevin Pimbblet et al. 2024. Detecting deepfakes using astronomy techniques. Royal Astronomical Society National Astronomy Meeting 2024 (NAM 2024)

Source: www.sci.news