Tech Firms to Work with UK Child Safety Agencies to Test Whether AI Tools Can Generate Abuse Images

Under a new UK law, tech companies and child protection agencies will be given the authority to test whether artificial intelligence tools can create child sexual abuse images.

The announcement follows reports from a safety watchdog of a sharp rise in AI-generated child sexual abuse material: the number of reports surged from 199 in 2024 to 426 in 2025.

Under the changes, the government will empower designated AI firms and child safety organizations to examine AI models, including the technology behind chatbots such as ChatGPT and image and video generators such as Google's Veo 3, to ensure safeguards are in place to prevent the creation of child sexual abuse images.

Kanishka Narayan, the Minister of State for AI and Online Safety, emphasized that this initiative is “ultimately to deter abuse before it happens,” stating, “Experts can now identify risks in AI models sooner, under stringent conditions.”

The change in the law is needed because creating and possessing CSAM is illegal, which until now has prevented AI developers and others from testing whether their models can produce such images. Previously, authorities could only act after AI-generated CSAM had been uploaded online; this law seeks to head off the problem by stopping the images from being generated at all.

The amendments are part of the Crime and Policing Bill, which also establishes a prohibition on the possession, creation, and distribution of AI models intended to generate child sexual abuse material.

During a recent visit to Childline's London headquarters, Narayan listened to a simulated helpline call, based on reports of AI-enabled abuse, in which a teenager sought help after being blackmailed with a sexual deepfake of herself.

“Hearing about children receiving online threats provokes intense anger in me, and parents feel justified in their outrage,” he remarked.

The Internet Watch Foundation, which monitors CSAM online, reported that instances of AI-generated abusive content have more than doubled this year. Reports of Category A material, the most severe category of abuse, increased from 2,621 images or videos to 3,086.

Girls are predominantly targeted, accounting for 94% of the illegal AI images in 2025, while depictions of children aged zero to two rose sharply, from five in 2024 to 92 in 2025.

Kerry Smith, chief executive of the Internet Watch Foundation, said the legal changes could be "a crucial step in ensuring the safety of AI products before their launch."

“AI tools enable survivors to be victimized again with just a few clicks, allowing criminals to create an unlimited supply of sophisticated, photorealistic child sexual abuse material,” she noted. “Such material commodifies the suffering of victims and increases risks for children, particularly girls, both online and offline.”

Childline also revealed insights from counseling sessions where AI was referenced. The concerns discussed included using AI to evaluate weight, body image, and appearance; chatbots discouraging children from confiding in safe adults about abuse; online harassment with AI-generated content; and blackmail involving AI-created images.

From April to September this year, Childline reported 367 counseling sessions where AI, chatbots, and related topics were mentioned, a fourfold increase compared to the same period last year. Half of these references in the 2025 sessions pertained to mental health and wellness, including the use of chatbots for support and AI therapy applications.

Source: www.theguardian.com

Rise of AI Chatbot Sites Featuring Child Sexual Abuse Imagery Sparks Concerns Over Misuse

A chatbot platform featuring explicit scenarios involving preteen characters in illegal abuse images has raised significant concerns over the potential misuse of artificial intelligence.

A report from a child safety watchdog urged the UK government to establish safety guidelines for AI companies in light of an increase in technology-generated child sexual abuse material (CSAM).

The Internet Watch Foundation (IWF) reported that they were alerted by chatbot sites offering various scenarios, including “child prostitutes in hotels,” “wife engaging in sexual acts with children while on vacation,” and “children and teachers together after school.”

In certain instances, the IWF noted that clicking the chatbot icon led to full-screen representations of child sexual abuse images, serving as a background for subsequent interactions between the bot and the user.

The IWF discovered 17 AI-generated images that appeared realistic enough to be classified as child sexual abuse material under the Protection of Children Act.

Users of the site, which the IWF did not name for safety reasons, could also generate further images resembling the illegal content already available.

The IWF, which operates from the UK with a global remit to monitor child sexual abuse online, said future AI regulation should incorporate child protection from the outset.

The government has announced plans for AI legislation expected to concentrate on the most advanced frontier models, while the Crime and Policing Bill will prohibit the possession and distribution of models designed to produce child sexual abuse material.

"We welcome the UK government's initiative to combat AI-generated images and videos of child sexual abuse, along with the tools used to create them. While the new criminal offenses will not be implemented immediately, it is critical to expedite this process," said Chris Sherwood, chief executive of the NSPCC, as the charity emphasized the need for guidelines.

User-generated chatbots fall under the UK's online safety regulations, which allow for substantial fines for non-compliance. The IWF indicated that the sexual abuse chatbots were created by both users and the site's developers.

Ofcom, the UK regulator responsible for enforcing the law, remarked, “Combating child sexual exploitation and abuse remains a top priority, and online service providers failing to implement necessary safeguards should be prepared for enforcement actions.”

The IWF reported a staggering 400% rise in AI-generated abuse material reports in the first half of this year compared to the same timeframe last year, attributing this surge to advancements in technology.

While the chatbot content is accessible from the UK, it is hosted on a U.S. server and has been reported to the National Center for Missing and Exploited Children (NCMEC), the U.S. equivalent of the IWF. NCMEC said the CyberTipline report had been forwarded to law enforcement. The IWF added that the site appears to be operated by a company based in China.

The IWF noted that some chatbot scenarios included an 8-year-old girl trapped in an adult’s basement and a preteen homeless girl being invited to a stranger’s home. In these scenarios, the chatbot presented itself as the girl while the user portrayed an adult.

IWF analysts reported accessing explicit chatbots through links in social media ads that directed users to sections containing illegal material. Other areas of the site offered legal chatbots and non-sexual scenarios.

According to the IWF, one chatbot that displayed CSAM images revealed in an interaction that it was designed to mimic preteen behavior. Other chatbots that did not display CSAM indicated, when analysts made similar inquiries, that they had not been trained or restricted in that way.

The site recorded tens of thousands of visits, including 60,000 in July alone.

A spokesperson for the UK government stated, "UK law is explicit: creating, owning, or distributing images of child sexual abuse, including AI-generated content, is illegal… We recognize that more needs to be done. The government will utilize all available resources to confront this appalling crime."

Source: www.theguardian.com

ADHD Medications Lower the Risk of Crimes, Substance Abuse, and Accidents


ADHD symptoms can be effectively managed through medication and therapy

Alex Di Stasi/Shutterstock

A study of around 150,000 people with ADHD in Sweden found that those who use medication to manage their symptoms face a lower risk of suicidal behavior, criminal charges, substance misuse, accidental injuries, and traffic incidents. Prior research points the same way, but the team behind the latest study says it provides the strongest evidence to date.

“This represents the best methodology, akin to a randomized trial,” states Zheng Chang from the Karolinska Institute in Sweden.

The wider benefits of ADHD medication may not be fully appreciated when people weigh up whether to use it, says Samuele Cortese at the University of Southampton, UK. He suggests parents often focus on immediate academic difficulties but should also consider the potential long-term outcomes.

“Neglecting ADHD can be risky,” he emphasizes. “Current evidence indicates that treatment lowers these risks.”

Individuals with ADHD frequently struggle with attention and exhibit impulsivity. Randomized controlled trials indicate that medications are effective in handling immediate symptoms.

Such trials involve randomly assigning individuals to either receive treatment or not, regarded as the gold standard in medical research. However, no randomized studies have yet evaluated the broader effects of ADHD medications, forcing researchers to rely on observational studies, which do not definitively prove that medication leads to noted behavioral changes.

Recently, Chang, Cortese, and their team used a method known as target trial emulation, drawing on Swedish medical and legal records to compare people who began ADHD medication promptly after diagnosis with those who delayed.

The results indicated that those using ADHD medications were 25% less likely to face criminal charges or experience substance problems. They also recorded a 16% reduction in traffic accident involvement, a 15% lower risk for suicide attempts, and a 4% decrease in accidental injuries.

"Understanding whether medication can influence daily life beyond mere symptom alleviation is invaluable," said Adam Guastella of the University of Sydney, Australia, in comments to the UK Science Media Centre. "This knowledge will also assist governments and policymakers in recognizing the potential societal benefits of comprehensive care, including mental health and criminal justice outcomes."

If you need someone to talk to, please reach out: UK Samaritans: 116 123 (samaritans.org); US 988 Suicide & Crisis Lifeline: 988 (988lifeline.org). Find more helplines at bit.ly/suicidehelplines for other regions.


Source: www.newscientist.com

China’s Cyber Abuse Scandal: Is the Government Taking Action Against Online Exploitation of Women?

When Heng Min* discovered a concealed camera in her bedroom, she initially hoped for a benign explanation, suspecting her boyfriend might have set it up to capture memories of their "happy life" together. That hope quickly turned to fear as she realized he had been secretly taking sexually exploitative photos of her and her female friends, as well as other women in various locations. He had even used AI technology to create pornographic images of them.

When Min confronted him, he begged for forgiveness but became angry when she refused to reconcile, she told the Chinese news outlet Jimu News.

Min is not alone; many women in China have fallen victim to voyeuristic filming in both private and public spaces, including restrooms. Such images are often shared or sold online without consent. Sexually explicit photos, frequently captured via pinhole cameras hidden in everyday objects, are disseminated in large online groups.

The scandal has caused an outcry in China, raising questions about the government's capability and willingness to address such abuse.


A notable group on Telegram, an encrypted messaging app, is the “Maskpark Tree Hole Forum,” which reportedly boasted over 100,000 members, mostly male.

"The Mask Park incident highlights the extreme vulnerability of Chinese women in the digital realm," Li Maizi, a prominent Chinese feminist based in New York, told the Guardian.

“What’s more disturbing is the frequency of perpetrators who are known to their victims: committing sexual violence against partners, boyfriends, and even minors.”

The scandal ignited outrage on Chinese social media, stirring discussions about the difficulty of combating online harassment in the country. While Chinese regulators are equipped to impose stricter measures against online sexual harassment and abuse, their current focus appears to be on suppressing politically sensitive information, according to Eric Liu, a former content moderator for Chinese social media platforms and now an editor at China Digital Times, based in the US.

Since the scandal emerged, Liu has observed "widespread" censorship of the Mask Park incident on the Chinese internet. Posts with potential social impact, especially those related to feminism, are frequently censored.

"If the Chinese government had the will, they could undoubtedly shut down the group," Liu noted. "The scale of [MaskPark] is significant. Cases of this magnitude have not gone unchecked in recent years."

Nevertheless, Liu said he is not surprised. "Such content has always existed on the Chinese internet."

In China, individuals found guilty of disseminating pornographic material can face up to two years in prison, while those who capture images without consent may be detained for up to ten days and fined. The country also has laws designed to protect against sexual harassment, domestic violence, and cyberbullying.

However, advocates argue that the existing legal framework falls short. Victims often find themselves needing to gather evidence to substantiate their claims, as explained by Xirui*, a Beijing-based lawyer specializing in gender-based violence cases.

“Certain elements must be met for an action to be classified as a crime, such as a specific number of clicks and subjective intent,” Xirui elaborated.

“Additionally, there’s a limitation on public safety lawsuits where the statute of limitations is only six months, after which the police typically will not pursue the case.”


The Guardian contacted China’s Foreign Ministry for a statement.


Beyond legal constraints, victims of sexual offenses often grapple with shame, which hinders many from coming forward.

“There have been similar cases where landlords set up cameras to spy on female tenants. Typically, these situations are treated as privacy violations, which may lead to controlled detention, while victims seek civil compensation,” explained Xirui.

To address these issues, the government could strengthen specialized laws, enhance gender-based training for law enforcement personnel, and encourage courts to provide guidance with examples of pertinent cases, as recommended by legal experts.

For Li, the recent occurrences reflect a pervasive tolerance for and lack of effective law enforcement regarding these issues in China. Instead of prioritizing the fight against sexist and abusive content online, authorities seem more focused on detaining female writers involved in homoerotic fiction and censoring victims of digital abuse.

“The rise of deepfake technology and the swift online distribution of poorly filmed content have rendered women’s bodies digitally accessible on an unparalleled scale,” stated Li. “However, if authorities truly wish to address these crimes, it is entirely feasible to track and prosecute them, provided they invest the necessary resources and hold the Chinese government accountable.”

*Name changed

Additional research by Lillian Yang and Jason Tang Lu

Source: www.theguardian.com

Musk’s X Faces Negligence Claims Over Child Abuse Images

On Friday, a federal appeals court reinstated some lawsuits against Elon Musk’s X, alleging that the platform has become a haven for child exploitation. However, the court affirmed that X is largely protected from liability for harmful content.

While rejecting multiple claims, the 9th Circuit Court of Appeals in San Francisco ruled that X (formerly Twitter) must face an allegation that it was negligent in failing to promptly report a video featuring explicit images of two underage boys to the National Center for Missing and Exploited Children (NCMEC).

This incident occurred prior to Musk's acquisition of Twitter in 2022. A lower-court judge had dismissed the case in December 2023, and X's legal counsel has yet to comment. Musk was not named as a defendant.

One plaintiff, John Doe 1, recounted that at the age of 13, he and his friend, John Doe 2, were lured on Snapchat into sharing nude photos, believing they were communicating with a 16-year-old girl.

In reality, the Snapchat user was a trafficker in child exploitation images, who threatened the plaintiff and solicited more photos from him. These images were ultimately compiled into a video that was disseminated on Twitter.

Court documents revealed that Twitter took nine days to report the content to NCMEC after becoming aware of it, during which time the video amassed over 167,000 views.

Circuit Judge Danielle Forrest stated that Section 230 of the Communications Decency Act, which typically shields online platforms from liability for user-generated content, does not protect X from negligence claims once it became aware of the images.

"The facts presented here, along with the statutory 'actual knowledge' requirement, establish that X's duty to report child pornography to NCMEC is distinct from its role as a publisher," she wrote on behalf of the three-judge panel.

X remained protected, however, from other claims, including allegations that its infrastructure made reporting child abuse images unreasonably difficult, that it intentionally facilitated sex trafficking, and that it developed a search function that "amplifies" images of child exploitation.

Dani Pinter, a lawyer for the plaintiffs and senior legal counsel at the National Center on Sexual Exploitation, provided a statement on the ruling.

Source: www.theguardian.com

Meta Moderator Opens Up About Breakdown Following Exposure to Beheading and Child Abuse: 'I Couldn't Eat or Sleep'

When Solomon* entered the gleaming Octagon Tower in Accra, Ghana, he was embarking on his journey as a Meta content moderator. Tasked with removing harmful content from social media, he expected a challenging yet rewarding role.

However, just two weeks into his training, he encountered a much darker side of the job than he had anticipated.

“I initially didn’t encounter graphic content, but eventually, it escalated to images of beheadings, child abuse, bestiality, and more. The first time I saw that content, I was completely taken aback.”




The Octagon building in Accra. Photograph: Foxglove

“Eventually, I became desensitized and began to normalize what I was seeing. It was disturbing to find myself watching beheadings and child abuse.”

“I’ll never forget that day,” Solomon recounted, having arrived from East Africa in late 2023. “The system doesn’t allow you to skip. You must view it for a minimum of 15 seconds.”

In one particular video, a woman from his homeland cried for help as several assailants attacked her.

He noted that this exposure was increasingly unsettling. One day there were no graphic videos, but as a trend emerged, suddenly around 70-80% of the content became graphic. He gradually felt “disconnected from humanity.”

In the evenings, he returned to shared accommodation provided by his employer, the outsourcing firm Teleperformance, where he faced problems with privacy, water, and electricity.

When Solomon learned of his childhood friend's death, it shattered his already fragile mental state. He was broken, trapped in his own thoughts, and asked Teleperformance for temporary leave until he could regain his composure.

Isolating himself for two weeks, he admitted, “I began to spiral into depression. I stopped eating and sleeping, smoking day in and day out. I was never this way before.”

Solomon tried to take his own life and was hospitalized, where he was diagnosed with major depressive disorder and suicidal ideation. He was discharged eight days later, towards the end of 2024.

Teleperformance offered him a lower-paying position, but he feared it would not be enough to live on in Accra. He sought compensation for his distress and long-term psychological care, but instead, Teleperformance sent him back to his home country, which was in the midst of unrest.

“I feel used and discarded. They treated me like a disposable water bottle,” Solomon expressed after his termination.

He reflected on his past professional life in his home country, saying, “I was content and at peace before coming here.”

Another moderator, Abel*, defended Solomon and said he ended his own contract in solidarity with fellow employees.

He confronted Teleperformance: "You're not treating him fairly."

“They isolated him at home. He felt unsafe being alone, which caused him severe stress, prompting him to return to work.”

Abel also faced mental health struggles stemming from the content. "I was unaware of the nature of the job and the reality of viewing explicit material for work… The first time I encountered blood, I was left numb."

He mentioned that colleagues often gathered to sip coffee and discuss disturbing material, even sharing their discomfort.

He hesitated to discuss these issues with wellbeing coaches due to a fear of how his concerns would be perceived by his team leader. He faced challenges when he declined to utilize a wellness service he believed was merely for “research purposes.”

A spokesperson for Teleperformance stated: "Recognizing his depression following his friend's death, we conducted a psychological evaluation and found he was unfit to continue in a moderation role.

“We offered a different non-moderating position, which he declined, expressing a desire to remain in his current role. With that not being a viable option, his employment ended, and he was provided compensation per our contractual agreement.

“Throughout his tenure and afterward, we ensured ongoing psychological support. He consistently declined assistance. At the suggestion of his family, help was arranged for him, and upon medical approval, arrangements for a flight to Ethiopia were made.

"We have maintained support for him in Ethiopia, but he has avoided it, instead attempting to pressure Teleperformance for monetary compensation under the threat of public exposure."

*The name has been changed to protect their identity

Source: www.theguardian.com

Concerns rise over potential Trump administration use of Israeli spyware amid abuse allegations

WhatsApp recently won a legal battle against NSO Group, an Israeli cyberweapons manufacturer. Despite this victory, a new threat has emerged from another company founded in Israel, Paragon Solutions, which also operates in the United States.

In January, WhatsApp revealed that 90 users, including journalists and civil society members, had been targeted last year by spyware created by Paragon Solutions, raising concerns about how Paragon's government clients use its hacking tools.

Among the targeted individuals were the Italian journalist Francesco Cancellato, the migrant rescue NGO founder Luca Casarini, and the Libyan activist Husam El Gomati. University of Toronto researchers who work closely with WhatsApp plan to release a technical report on the breach.

Paragon, like NSO Group, provides spyware to government agencies. The spyware, known as Graphite, allows for hacking without the user’s knowledge, granting access to photos and encrypted messages. Paragon claims its use aligns with US policies for national security missions.

Paragon said it has a zero-tolerance policy for violations and that it terminated its contracts with Italy after the terms were breached. David Kaye, a former UN special rapporteur, described the marketing of such surveillance products as an abuse and a threat to the rule of law.

The issue has relevance in the US, where the Biden administration blacklisted NSO in 2021 due to reports of abuse. A contract between ICE and Paragon was suspended after concerns were raised about spyware use.

Paragon assures compliance with US laws and regulations, following the Biden executive order. The company, now US-owned, has a subsidiary in Virginia. Concerns remain about potential misuse against political opponents.

Experts from Citizen Lab and Amnesty Tech remain vigilant in detecting unlawful surveillance in democracies worldwide.

Source: www.theguardian.com

AI tools used to create child abuse imagery to be outlawed, Home Office announces

The United Kingdom is set to become the first country to introduce laws targeting AI tools used to create child sexual abuse material, the Home Office has announced.

It will become illegal to possess, create, or distribute AI tools specifically designed to generate sexual abuse material involving children, closing a significant legal loophole that has been a major concern for law enforcement and online safety advocates. Violators will face up to five years in prison.

There will also be a ban on manuals that instruct potential offenders on how to produce abusive images using AI tools; distributing such material will carry a prison sentence of up to three years.

Additionally, it will become an offence to run websites where offenders share abusive images or grooming advice. Border Force officers will gain expanded powers to compel individuals suspected of posing a sexual risk to children to unlock and hand over digital devices for inspection.

The use of AI tools in creating images of child sexual abuse has increased significantly, with a reported four-fold increase over the previous year. According to the Internet Watch Foundation (IWF), there were 245 instances of AI-generated child sexual abuse images in 2024, compared to just 51 the year before.

These AI tools are being used in various ways by perpetrators seeking to exploit children, such as modifying a real child's image to appear nude or superimposing a child's face onto existing abusive images. The voices of real children and victims are also incorporated into this manipulated material.

The newly generated images are often used to threaten children and coerce them into more abusive situations, including live-streamed abuse. These AI tools also serve to conceal perpetrators’ identities, groom victims, and facilitate further abuse.

Technology secretary Peter Kyle said the UK must keep ahead of the AI revolution. Photograph: Wiktor Szymanowicz/Future Publishing/Getty Images

Senior police officials have noted that individuals viewing such AI-generated images are more likely to engage in direct abuse of children, raising fears that the normalization of child sexual abuse may be accelerated by the use of these images.

A new law, part of upcoming crime and policing legislation, is being proposed to address these concerns.

Technology Secretary Peter Kyle emphasized that the country cannot afford to lag behind in addressing the potential misuse of AI technology.

He stated in an Observer article that while the UK aims to be a global leader in AI, the safety of children must take precedence.


Concerns have been raised about the impact of AI-generated content, with calls for stronger regulations to prevent the creation and distribution of harmful images.


Experts are urging enhanced measures to tackle the misuse of AI technology while acknowledging its potential benefits. Derek Ray-Hill, the interim chief executive of the IWF, highlighted the need to balance innovation with safeguards against abuse.

Rani Govender, policy manager for child safety online at the NSPCC, emphasized the importance of preventing the creation of harmful AI-generated images in order to protect children from exploitation.

In order to achieve this goal, stringent regulations and thorough risk assessments by tech companies are essential to ensure children’s safety and prevent the proliferation of abusive content.

In the UK, NSPCC offers support for children at 0800 1111, with concerns for children available at 0808 800 5000. Adult survivors can seek assistance from Napac at 0808 801 0331. In the United States, contact Childhelp at 800-422-4453 for abuse hotline services. For support in Australia, children, parents, and teachers can reach out to Kids Helpline at 1800 55 1800, or contact Bravehearts at 1800 272 831 for adult survivors. Additional resources can be found through Blue Knot Foundation at 1300 657 380 or through the Child Helpline International network.

Source: www.theguardian.com

Experts Warn X’s New AI Software Enables Racist Abuse Online: It’s Only the Beginning

Experts in online abuse have warned that a rise in online racism fuelled by fake images is just the beginning of the problems likely to follow a recent update to X's AI software.

Concerns were first raised in December last year when numerous computer-generated images produced by X's generative AI chatbot, Grok, were shared widely on social media platforms.

Signify, an organization that collaborates with leading sports bodies and clubs to monitor and report instances of online hate, has noted a rise in abuse reports since the latest update of Grok, warning that this type of behavior is likely to become more widespread with the introduction of AI.

Elaborating on the issue, a spokesperson stated that the current problem is only the tip of the iceberg and is expected to worsen significantly in the next year.

Grok, introduced by Elon Musk in 2023, recently launched a new feature called Aurora, which enables users to create photorealistic AI images based on simple prompts.

Reports indicate that the latest Grok update is being misused to generate photo-realistic racist images of various soccer players and coaches, sparking widespread condemnation.

The Center for Countering Digital Hate (CCDH) expressed concerns about X’s role in promoting hate speech through revenue-sharing mechanisms, facilitated by AI-generated imagery.

The absence of stringent restrictions on user requests and the ease of circumventing AI guidelines are among the key issues highlighted, with Grok producing a significant number of hateful prompts without appropriate safeguards.

In response to the alarming trend, the Premier League has taken steps to combat racist abuse directed towards athletes, with measures in place to identify and report such incidents, potentially leading to legal action.

Both X and Grok have been approached for comment regarding the situation.

Source: www.theguardian.com

Sexual Abuse Allegations Against OpenAI CEO Sam Altman Made by Sister Lead to Lawsuit

The sister of OpenAI CEO Sam Altman has filed a lawsuit alleging that he sexually abused her on a regular basis over several years as a child.

The lawsuit, filed Jan. 6 in the U.S. District Court for the Eastern District of Missouri, alleges the abuse began when Ann Altman was 3 years old and Sam Altman was 12. The complaint alleges that the last abuse occurred after he was an adult, but his sister, known as Annie, was still a child.

The OpenAI chief executive posted a joint statement on X, signed alongside his mother Connie and brothers Max and Jack, denying the allegations and calling them "totally false."

“Our family loves Annie and is extremely concerned about her health,” the statement said. “Caring for family members facing mental health challenges is incredibly difficult.”

It added: “Annie has made deeply hurtful and completely untrue allegations about our family, especially Sam. This situation has caused immeasurable pain to our entire family.”

Ann Altman previously made similar allegations against her brother on social media platforms.

In a court filing, her lawyer said she had experienced mental health issues as a result of the alleged abuse. The lawsuit seeks a jury trial and more than $75,000 (£60,000) in damages and legal fees.

The family's statement said Ann Altman had made "deeply hurtful and completely false allegations" about the family and accused her of demanding money from them.

It added that the family had offered her "monthly financial assistance" and had tried to arrange medical help, but that she "refused conventional treatment."

The family said they had previously decided not to publicly respond to the allegations, but chose to do so following her decision to take legal action.

Sam Altman, 39, is one of the most prominent leaders in technology and the co-founder of OpenAI, best known for ChatGPT, an artificial intelligence (AI) chatbot launched in 2022.

The billionaire briefly stepped down as chief executive in November 2023 after the company's board ousted him, saying he had not been "consistently candid in his communications." After nearly all employees threatened to resign, he returned to the job the following week, and he rejoined the board last March following an external investigation.

Source: www.theguardian.com

UK police boss warns that AI is on the rise in sextortion, fraud, and child abuse cases

A senior police official has issued a warning that pedophiles, fraudsters, hackers, and criminals are now utilizing artificial intelligence (AI) to target victims in increasingly harmful ways.

According to Alex Murray, the national police lead for artificial intelligence, criminals are taking advantage of the expanding accessibility of AI technology, necessitating swift action by law enforcement to combat these new threats.

Murray stated, “Throughout the history of policing, criminals have shown ingenuity and will leverage any available resource to commit crimes. They are now using AI to facilitate criminal activities.”

He further emphasized that AI is being used for criminal activities on both a global organized crime level and on an individual level, demonstrating the versatility of this technology in facilitating crime.

During the recent National Police Chiefs’ Council meeting in London, Mr. Murray highlighted a new AI-driven fraud scheme where deepfake technology was utilized to impersonate company executives and deceive colleagues into transferring significant sums of money.

Instances of similar fraudulent activities have been reported globally, with concern growing over the increasing sophistication of AI-enabled crimes.

The use of AI by criminals extends beyond fraud, with pedophiles using generative AI to produce illicit images and videos depicting child sexual abuse, a distressing trend that law enforcement agencies are working diligently to combat.

Additionally, hackers are employing AI to identify vulnerabilities in digital systems, providing insights for cyberattacks, highlighting the wide range of potential threats posed by the criminal use of AI technology.

Furthermore, concerns have been raised regarding the radicalization potential of AI-powered chatbots, with evidence suggesting that these bots could be used to encourage individuals to engage in criminal activities including terrorism.

As AI technologies continue to advance and become more accessible, law enforcement agencies must adapt rapidly to confront the evolving landscape of AI-enabled crime and prevent a surge in such offences by 2029.

Source: www.theguardian.com

Report: Increase in online presence of AI-generated images depicting child sexual abuse | Technology

Child sexual exploitation is increasing online, with artificial intelligence being used to generate new forms of abuse material, including images and videos of child sexual abuse.


Reports of online child abuse to NCMEC increased by more than 12% from the previous year to over 36.2 million in 2023, as announced in the organization’s annual CyberTipline report. Most reports were related to the distribution of child sexual abuse material (CSAM), including photos and videos. Online criminals are also enticing children to send nude images and videos for financial gain, with increased reports of blackmail and extortion.

NCMEC has reported instances where children and families have been targeted for financial gain through blackmail using AI-generated CSAM.

The center has received 4,700 reports of child sexual exploitation images and videos created by generative AI, although tracking in this category only began in 2023, according to a spokesperson.

NCMEC is alarmed by the growing trend of malicious actors using artificial intelligence to produce deepfaked sexually explicit images and videos based on real children’s photos, stating that it is devastating for the victims and their families.

The group emphasizes that AI-generated child abuse content hinders the identification of actual child victims and is illegal in the United States, where production of such material is a federal crime.

In 2023, CyberTipline received over 35.9 million reports of suspected CSAM incidents, with most uploads originating outside the US. There was also a significant rise in online solicitation reports and exploitation cases involving communication with children for sexual purposes or abduction.

Top platforms for cybertips included Facebook, Instagram, WhatsApp, Google, Snapchat, TikTok, and Twitter.


Out of 1,600 global companies registered for the CyberTip Reporting Program, 245 submitted reports to NCMEC, including US-based internet service providers required by law to report CSAM incidents to CyberTipline.

NCMEC highlights the importance of quality reports, as some automated reports may not be actionable without human involvement, potentially hindering law enforcement in detecting child abuse cases.

NCMEC’s report stresses the need for continued action by Congress and the tech community to address reporting issues.

Source: www.theguardian.com

My 17-year-old son was arrested for distributing child abuse images. He said it came as a relief

Louise* thought she had been honest with her two children about the risks of the internet. However, last year, at 6 a.m., the police knocked on her door looking for her 17-year-old son.

"Five or six police officers came up my stairs," she recalled. When they told her they were looking for her son over indecent images, she felt as though she was going to pass out.

"I said, 'Oh my god, he's autistic. Has he been groomed?' They confiscated all his devices and took him away. I was so stunned that I almost vomited after they left."

Louise’s son is just one of many under-18s accused by law enforcement of viewing or sharing indecent images of children in the past year.

A study published in February found that some people who consume child sexual abuse material (CSAM) say they became desensitized to adult pornography and went looking for more extreme or violent content.

In December, an investigation by The Guardian revealed that in certain areas, the majority of individuals identified by authorities as viewing or sharing indecent images of children were under 18.

Experts argue that this is part of a larger crisis caused by predators grooming children through chat apps and social media platforms.

In January, the Internet Watch Foundation cautioned that over 90% of child abuse images online are self-produced, meaning they are generated and distributed by children themselves.

Louise believes her son's natural teenage curiosity about pornography steered him down a dangerous path of interacting with strangers and sharing explicit images. Alex* was convicted of viewing and distributing a small number of child abuse images, some falling under Category A (depicting rape and abuse of young children), with the rest in Categories B and C.

Louise acknowledges that her son, who received an 18-month community sentence and is now on the sex offenders register for five years, committed a serious offense and must face the consequences. But she also wants other parents to understand how it happened.

“It all began with an obsession common among many young people with autism,” she explained. “He adored manga and anime. I can’t even count how many miles he traveled to buy manga for himself.

“This interest led him from innocent cartoons to sexualized images, eventually leading him to join a group where teenagers exchange pornography.”

Alex has since admitted to his mother that he had an interest in pornography and was part of online groups with names like “Sex Images 13 to 17.” “What teenager isn’t curious?” Louise pondered.

It was on these popular sites and chat apps that adults were waiting to exploit vulnerable young individuals like him.

“He was bombarded with messages,” Louise shared. “Literally thousands of messages from individuals attempting to manipulate him. This boy has struggled for years to fit in as an autistic kid at school. He’s been a victim of bullying. And all of a sudden, he felt accepted. He felt a sense of excitement.

“Adults coerced him into sharing images of abuse. If he hadn’t been caught, who knows where it could have led?”

Louise questioned Alex why he didn’t show the images he received to an adult.

"I even asked him, 'Why didn't you tell me as soon as you saw the images?' And he replied, 'Mum, you don't know how hard that is to do,'" she said of the months he had spent in these online spaces. "His actual words when the police arrived were, 'Oh, thank God.' It was a relief to him."

She said that lockdown had shifted the dynamics for young people like her son, with their lives increasingly reliant on the internet. "They were instructed, 'Just go online and do everything there.'"

Both Alex and his mother are receiving support from the Lucy Faithfull Foundation, a charity that works to prevent child sexual abuse, including with online offenders. Last year, 217,889 people concerned about their own or someone else's sexual thoughts or behavior contacted it for help.

The organization recently launched a website called Shore, aimed at young people anxious about their own sexual thoughts and behavior. Following the lifting of lockdown restrictions, calls to support hotlines for under-18s rose by 32%.

Alex also reflected on the precarious position he found himself in. “I was in my final year of sixth form, at home while my friends were heading off to university, so I felt anxious and fearful about our friendship drifting apart.

"It was here that I made the fateful decision to use multiple chat platforms to try to build friendships. Although I had no intention of anything sexual, natural sexual curiosity, inexperience, the fear of missing out, and the powerful effect of anonymity made it very easy to get drawn into these things."

He cautions that his generation’s utilization of the online realm demands novel approaches to safeguard children better.

“This issue cannot be resolved by simply advising against talking to strangers on the internet. That information is outdated,” he remarked.

"Many people believe that this content can only be found on the dark web, when in fact it can be found in the shallowest parts of the internet without any effort. If it had been harder to reach, I might have thought twice, but unfortunately I was in too deep and it was too late."

*Name has been changed

  • If you have concerns about images your child may have shared themselves, you can report them through Report Remove, the joint Childline and Internet Watch Foundation service. You can also report images of child sexual abuse through the same website. If you are concerned about the sexual behavior of young people, please visit: shorespace.org.uk

Source: www.theguardian.com

Epic Games challenges Apple and Google in Australia amid claims of market power abuse

When Apple’s first iPhone was released in 2007, all of its apps were created by Apple.

According to Walter Isaacson's biography, Steve Jobs was reluctant to allow apps from third-party developers on the iPhone. He eventually bowed to pressure and launched the App Store in 2008, but the company wanted to maintain strict control over what was allowed on the platform, as emails released during a 2021 court case revealed.

The case, which will be heard over the next five months in Melbourne’s Federal Court, will center on Apple’s control over its empire. At the same time, Google, which has prided itself on having a more open ecosystem than Apple, will have its practices tested.


Two cases in Australia’s Federal Court were adjourned in April 2021, pending the outcome of a similar case in the United States. Epic Games, the maker of the popular game Fortnite, has spent the past three years in a global legal battle against Apple and Google, alleging abuse of market power over their app stores.


Fortnite was removed from Apple's App Store and Google Play in 2020 after Epic Games introduced its own in-app payment system, bypassing the platforms' systems and the fees Apple and Google collect on in-app purchases.

Epic largely lost its 2021 antitrust lawsuit against Apple, but won its case against Google late last year. Although the Australian cases were initially separate, they have now been consolidated: Justice Jonathan Beach decided to hear the two cases and a related class action at the same time to avoid duplicating witness evidence.

David and Goliath?

In the Australian lawsuit, which originally began in 2020, Epic Games argues that Apple's control over in-app purchases, and its banning of the Fortnite app, amounted to an abuse of market power that substantially lessened competition in app development. The company also claims that Google has harmed Australian app developers and consumers by restricting how apps are distributed and how in-app payments can be made on Android devices.

As with their mobile operating systems, the cases against Apple and Google have many similarities, but there are also important differences. Apple's iOS and App Store are completely closed and controlled by Apple: if an app is on your phone and a payment is made through that app, it has to go through Apple.

Similar rules apply to the Play Store in Google’s Android operating system, but Google also allows apps to be “sideloaded,” or installed directly onto a phone without using the app store. It also allows phone manufacturers like Samsung to have their own app stores. Fortnite is still available on Android, but only through sideloading or the Samsung Store.

Both companies charge fees on transactions in their app stores. Google Play charges a 15% commission on the first $1m a developer earns each year, rising to 30% above that. Apple charges a 15% fee if a developer's revenue in the previous year was under $1m, and 30% if it was more.

Fees are common in the industry, with Epic’s own store charging developers a 12% fee.
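As a rough, illustrative sketch of the arithmetic behind those tiers (figures as reported above; the function names and example revenue are hypothetical, and real contracts have further conditions), the two fee schedules work out as follows:

```python
# Illustrative sketch only: hypothetical helpers applying the commission
# tiers described above (figures in USD, as reported).

def google_play_fee(annual_revenue: float) -> float:
    """Google Play: 15% on the first $1m a developer earns in a year, 30% above that."""
    first_tier = min(annual_revenue, 1_000_000)
    rest = max(annual_revenue - 1_000_000, 0)
    return first_tier * 0.15 + rest * 0.30

def apple_fee(annual_revenue: float, prior_year_revenue: float) -> float:
    """Apple: a flat 15% if prior-year revenue was under $1m, otherwise 30%."""
    rate = 0.15 if prior_year_revenue < 1_000_000 else 0.30
    return annual_revenue * rate

if __name__ == "__main__":
    revenue = 3_000_000
    print(f"Google Play fee: ${google_play_fee(revenue):,.0f}")    # $750,000
    print(f"Apple fee:       ${apple_fee(revenue, revenue):,.0f}")  # $900,000
```

On $3m of annual revenue, for example, the tiered Google Play schedule comes to $750,000, while Apple's flat 30% rate comes to $900,000.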

Epic argues that it should be able to offer its store as a competitor to Apple’s store, and that it should also be able to offer alternative payment options within its official game store apps.


Google claims to be more open than the Apple App Store, but it was this openness that hurt the tech company in the US lawsuit. The jury found that tying the Google Play Store to in-app payments was illegal and that the company had entered into anti-competitive agreements with some developers to keep their apps on the Play Store.

In the Apple case, the judge took a narrower view, considering mobile game transactions specifically rather than app stores as a whole. The judge found that Apple is not a monopoly and is in competition with Google and other companies. The judge also upheld Apple’s concerns about the security implications of opening the App Store and sided with the company’s pursuit of intellectual property royalties through in-app payments.

Apple is expected to mount a similar defence in Australia. The company believes there is little difference between the cases and that the principles underlying Australian competition law are similar to US antitrust principles.

Apple sees Epic not as a David fighting a Goliath, but as a multibillion-dollar company seeking more profit at the expense of iPhone users' safety.

Google claims that it not only offers customers a choice in the app store, but also offers alternative options for developers to sell their content outside of Google Play. It also points to permissions that allow sideloading of apps while maintaining user security, which Epic claims it is trying to water down.

"It's clear that Android and Google Play offer more choice and openness than other major mobile platforms, and are a good model for Australian developers and consumers," Wilson White, Google's vice-president for government affairs and public policy, said in a post this week.


"We will vigorously defend our right to a sustainable business model that keeps our users safe, allows us to grow our business in partnership with developers, and keeps the Android ecosystem thriving for all Australians."

Apple forced to make changes to EU App Store

Initial submissions will last two weeks, followed by three months of evidence from fact witnesses and experts, followed by two weeks of final submissions, ending in mid-July.

Witnesses expected to testify include Epic CEO Tim Sweeney, who is in Melbourne for the hearing, as well as key executives from Apple and Google.

A concurrent class action lawsuit on behalf of Australian developers and consumers will fail if Epic’s lawsuit fails.

The case is unlikely to be resolved by the end of the year: Beach is not expected to deliver a judgment for at least six months after the hearing ends, and any decision could then be appealed.

Whether or not Epic wins the battle, Apple and Google may ultimately lose the app store war. Apple has been forced to implement changes to its App Store in the European Union, including allowing alternative payment options and marketplaces, under the Digital Markets Act. As a result, Apple last week reinstated Epic’s developer account in the EU.

Epic says Apple’s implementation of these changes is incomplete, but other governments, including Australia, may follow suit.

Source: www.theguardian.com

US Police Prevented from Viewing Many Online Child Sexual Abuse Reports, Lawyers Say

Social media companies relying on artificial intelligence software to moderate their platforms are generating unworkable reports of child sexual abuse cases, leaving U.S. police unable to see potential leads and delaying investigations into suspected predators, the Guardian has learned.

By law, U.S.-based social media companies are required to report child sexual abuse content detected on their platforms to the National Center for Missing and Exploited Children (NCMEC), which serves as a national clearinghouse for child abuse information and forwards reports to relevant law enforcement agencies in the United States and around the world. NCMEC said it received more than 32 million reports of suspected child sexual exploitation, and approximately 88 million images, videos, and other files, from companies and the public in 2022.

Meta is the largest reporter of this information, with over 27 million reports (84%) generated across its Facebook, Instagram, and WhatsApp platforms in 2022. NCMEC is partially funded by the Department of Justice and also receives private and corporate donations.

Social media companies, including Meta, use AI to detect and report suspicious content on their sites, and employ human moderators who review some flagged content before it is sent to law enforcement. But when a report is generated by AI and no human at the company has viewed the content, U.S. law enforcement agencies can only access the flagged child sexual abuse material (CSAM) by serving a search warrant on the company that filed the report, which can add days or even weeks to the investigation process.

"If a company reports a file to NCMEC and does not indicate that it viewed the file before reporting, we will not be able to open the file," said Staca Shehan, vice president of analytical services at NCMEC.

Because of Fourth Amendment privacy protections, neither law enforcement officials nor the federally funded NCMEC can open the contents of a report without a search warrant unless a representative of the social media company has first reviewed them.

NCMEC staff and law enforcement agencies cannot legally view AI-flagged content that no human has seen, which can stall investigations into suspected predators for weeks and lead to the loss of evidence.

"Any delay [in viewing the evidence] that allows criminals to go undetected for longer is detrimental to community safety," said an assistant U.S. attorney in California, who spoke on condition of anonymity. "They are dangerous to all children."

In December, the New Mexico attorney general's office filed a lawsuit against Meta, alleging that its social networks have become a marketplace for child predators and that Meta has repeatedly failed to report illegal activity on its platforms. In response, Meta said its priority was combating child sexual abuse content.

The state's attorney general, Raúl Torrez, laid the blame for the failure to send actionable information at Meta's feet. "Reports showing the inefficiency of the company's AI-generated cybertips prove what we said in the complaint," he said in a statement to the Guardian.

"It is long past time for the company to implement changes to its algorithms, staffing levels, and policies to ensure the safety of children, keep parents informed, and enable law enforcement to effectively investigate and prosecute online sex crimes against children," Torrez added.

Despite the legal limitations on AI-led moderation, social media companies are likely to increase its use in the near future. In 2023, OpenAI, the developer of ChatGPT, claimed that large language models could do the work of human content moderators with roughly the same accuracy.

However, child safety experts say the AI software that social media companies use to moderate content is effective mainly at identifying known child sexual abuse images, whose digital fingerprints, known as hashes, are already catalogued. Lawyers interviewed said the software is far less effective with newly created images, or when known images or videos have been altered.

"There is always concern about cases involving newly identified victims, and because they are new, the materials do not have a hash value," said Kristina Korobov, a senior attorney at the Zero Abuse Project, a nonprofit organization focused on combating child abuse. "If humans were doing the work, there would be more discoveries of newly identified victims."
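To make that limitation concrete, here is a minimal sketch of hash-based matching (not any company's actual pipeline; the function names and hash list are hypothetical, and real systems use perceptual hashes such as PhotoDNA rather than the plain cryptographic hash used here for simplicity):

```python
# Minimal sketch of hash-based matching of known images, illustrating why
# altered or newly created material slips past this kind of check: a plain
# SHA-256 only matches byte-identical files, and even perceptual hashes can
# only flag images that have already been catalogued.
import hashlib
from pathlib import Path

KNOWN_HASHES: set[str] = {
    # hypothetical entries; in practice such lists come from bodies like NCMEC or the IWF
    "3f5a0000000000000000000000000000000000000000000000000000000000aa",
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_image(path: Path) -> bool:
    """True only if this exact file has been catalogued; altered copies will not match."""
    return file_hash(path) in KNOWN_HASHES
```

A new image, or a known image that has been re-cropped or edited, produces a hash that is not in the list, which is why, as Korobov notes, material involving newly identified victims goes undetected by this approach.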

In the US, call or text the Childhelp abuse hotline on 800-422-4453 or visit its website for more resources. Support for adult survivors of child abuse is available at ascasupport.org. In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child can call 0808 800 5000. The National Association for People Abused in Childhood (Napac) offers support to adult survivors on 0808 801 0331. In Australia, children, young people, parents, and teachers can contact the Kids Helpline on 1800 55 1800; adult survivors can contact Bravehearts on 1800 272 831 or the Blue Knot Foundation on 1300 657 380. Additional sources of help can be found via Child Helpline International.

Source: www.theguardian.com