China’s Cyber Abuse Scandal: Is the Government Taking Action Against Online Exploitation of Women?

Ming* discovered a concealed camera in her bedroom and initially hoped for a benign explanation, suspecting her boyfriend might have set it up to capture memories of their “happy life” together. That hope quickly turned to fear as she realized he had been secretly taking sexually exploitative photos of her, her female friends, and other women in various locations. He had even used AI technology to create pornographic images of them.

When Ming confronted him, he begged for forgiveness but became angry when she refused to reconcile, she told the Chinese news outlet Jimu News.

Ming is not alone; many women in China have fallen victim to voyeuristic filming in both private and public spaces, including restrooms. Such images are often shared or sold online without consent. Sexually explicit photos, frequently captured via pinhole cameras hidden in everyday objects, are disseminated in large online groups.

This scandal has stirred unrest in China, raising concerns about the government’s capability and willingness to address such misconduct.


One notable group on Telegram, an encrypted messaging app, was the “MaskPark Tree Hole Forum,” which reportedly had more than 100,000 members, most of them male.

“The Mask Park incident highlights the extreme vulnerability of Chinese women in the digital realm,” stated Li Maizi, a prominent Chinese feminist based in New York, to the Guardian.

“What’s more disturbing is the frequency of perpetrators who are known to their victims: committing sexual violence against partners, boyfriends, and even minors.”

The scandal ignited outrage on Chinese social media and stirred discussion about the difficulty of combating online harassment in the country. While Chinese regulators have the tools to impose stricter measures against online sexual harassment and abuse, their current focus appears to be suppressing politically sensitive information, according to Eric Liu, a former content moderator for Chinese social media platforms who is now an editor at China Digital Times, a US-based website.

Since the scandal emerged, Liu has observed “widespread” censorship of the MaskPark incident on the Chinese internet. Posts with potential social impact, especially those related to feminism, are frequently censored.

“If the Chinese government had the will, they could undoubtedly shut down the group,” Liu noted. “The scale of [MaskPark] is significant. Cases of this magnitude have not gone unchecked in recent years.”

Nevertheless, Liu said he was not surprised: “Such content has always existed on the Chinese internet.”

In China, individuals found guilty of disseminating pornographic material can face up to two years in prison, while those who capture images without consent may be detained for up to ten days and fined. The country also has laws designed to protect against sexual harassment, domestic violence, and cyberbullying.

However, advocates argue that the existing legal framework falls short. Victims often find themselves needing to gather evidence to substantiate their claims, as explained by Xirui*, a Beijing-based lawyer specializing in gender-based violence cases.

“Certain elements must be met for an action to be classified as a crime, such as a specific number of clicks and subjective intent,” Xirui elaborated.

“Additionally, public security cases are subject to a statute of limitations of only six months, after which the police typically will not pursue the case.”


The Guardian contacted China’s Foreign Ministry for a statement.


Beyond legal constraints, victims of sexual offenses often grapple with shame, which hinders many from coming forward.

“There have been similar cases where landlords set up cameras to spy on female tenants. Typically, these situations are treated as privacy violations, which may result in administrative detention, while victims pursue civil compensation,” explained Xirui.

To address these issues, the government could strengthen specialized laws, enhance gender-based training for law enforcement personnel, and encourage courts to provide guidance with examples of pertinent cases, as recommended by legal experts.

For Li, the recent revelations reflect a pervasive tolerance of such abuse and a lack of effective law enforcement in China. Instead of prioritizing the fight against sexist and abusive content online, authorities seem more focused on detaining female writers of homoerotic fiction and censoring victims of digital abuse.

“The rise of deepfake technology and the swift online distribution of secretly filmed content have made women’s bodies digitally accessible on an unparalleled scale,” said Li. “However, if the authorities truly wished to address these crimes, it would be entirely feasible to track and prosecute them, provided they invested the necessary resources, and the Chinese government should be held accountable.”

*Names have been changed

Additional research by Lillian Yang and Jason Tang Lu

Source: www.theguardian.com

Surge in AI-Generated Child Exploitation Videos Online, Reports Watchdog

The number of online videos depicting child sexual abuse created with artificial intelligence has surged as pedophiles exploit advances in the technology, a watchdog has warned.

According to the Internet Watch Foundation, AI-generated abuse videos have crossed a threshold at which they are nearly indistinguishable from “actual images,” with a sharp increase observed this year.

In the first half of 2025, the UK-based internet safety watchdog examined 1,286 AI-generated videos containing illegal child sexual abuse material (CSAM), a sharp increase from just two in the same period last year.

The IWF reported that over 1,000 of these videos fall under Category A abuse, the most severe classification of such material.

The organization said that billions have been invested in AI, resulting in widely accessible video-generation tools that pedophiles are exploiting.

“It’s a highly competitive industry with substantial financial incentives, unfortunately giving perpetrators numerous options,” stated an IWF analyst.

The surge in videos is part of a 400% rise in URLs hosting AI-generated child sexual abuse content in the first half of 2025: the IWF received reports of 210 such URLs, compared with 42 over the same period last year.

The IWF found one post on a dark web forum in which a user noted the rapid improvement of AI and how quickly pedophiles had adapted AI tools to “better interact with new developments.”

IWF analysts said the videos appear to be created by taking freely available, basic AI models and “fine-tuning” them with CSAM to produce realistic footage. In some instances, only a small number of CSAM videos were needed for this fine-tuning, according to the IWF.

The most lifelike AI-generated abuse videos encountered this year were based on actual victims, the watchdog reported.

The IWF’s interim chief executive, Derek Ray-Hill, said that the rapid advancement of AI models, their broad accessibility, and their adaptability for criminal purposes could lead to a massive proliferation of AI-generated CSAM online.

“The risk of AI-generated CSAM is astonishing, leading to a potential flood that could overwhelm the clear web,” he stated, cautioning that the rise of such content might encourage criminal activities like child trafficking and modern slavery.

The replication of existing victims of sexual abuse in AI-generated images allows pedophiles to significantly increase the volume of CSAM online without having to exploit new victims, he added.

The UK government is intensifying efforts to combat AI-generated CSAM by criminalizing the possession, creation, or distribution of AI tools designed to produce abusive content. Those found guilty under the new law may face up to five years in prison.

Additionally, it is now illegal to possess manuals that instruct potential offenders on how to use AI tools for creating abusive images or for child abuse. Offenders could face up to three years in prison.

In a February announcement, Home Secretary Yvette Cooper stated, “It is crucial to address child sexual abuse online, not just offline.”

AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalizes the taking, making, distribution, and possession of an “indecent photograph or pseudo-photograph” of a child.

Source: www.theguardian.com

Utah State Lawsuit Alleges TikTok Was Aware of Child Exploitation Through Live Streaming Feature

TikTok has been aware for a long time that its video livestream feature was being misused to harm children, as revealed in a lawsuit filed by the state of Utah against the social media company. The harms include child sexual exploitation and what Utah describes as an “open door policy that allows predators and criminals to exploit users.”

The state’s attorney general said TikTok conducted an internal investigation that found adults were using the TikTok Live feature to solicit provocative behavior from teenagers, some of whom were paid for it. A separate internal investigation found that criminals used TikTok Live to launder money, sell drugs, and fund terrorist groups.

Utah first sued TikTok last June, alleging that the company was profiting from child exploitation. The lawsuit drew on internal documents obtained from TikTok through subpoenas. On Friday, the Utah attorney general’s office released an unredacted version of the complaint, despite TikTok’s efforts to keep the information confidential.

“Online exploitation of minors is on the rise, leading to tragic consequences such as depression, isolation, suicide, addiction, and human trafficking,” said Utah Attorney General Sean Reyes in a statement on Friday. He criticized TikTok for knowingly putting minors at risk for profit.

A spokesperson for TikTok responded to the Utah lawsuit by stating that the company has taken proactive steps to address safety concerns. The spokesperson mentioned that users must be 18 or older to use the Live feature and that TikTok provides safety tools for users.

The lawsuit against TikTok is part of a broader wave of US state attorneys general suing over child exploitation on social media apps. In December 2023, New Mexico sued Meta on similar grounds, and other states have also filed lawsuits against TikTok over similar allegations.

Following a 2022 report by Forbes, TikTok launched an internal investigation, Project Meramec, into teenagers earning money on TikTok Live. The investigation found that underage users were engaging in inappropriate behavior in exchange for digital currency.

The complaint also notes that TikTok takes a share of the digital gifts sent during livestreams, and the state argues that the platform’s algorithm promotes streams with sexual content because they are more profitable. Another internal investigation, Project Jupiter, examined organized crime’s use of Live for money laundering.

Source: www.theguardian.com

Activists Advocate for Public Transparency of Ride-Hailing App Data to Tackle Exploitation and Reduce Emissions

Activists are urging Uber and other ride-hailing apps to disclose data on their drivers’ workload to combat exploitation and reduce carbon emissions.

Analysis by Worker Info Exchange suggests that drivers for Uber and its competitors may have missed out on more than £1.2bn in earnings and expenses last year because of pay structures that do not cover the time and costs incurred between trips.

The report argues that these platforms are built on an oversupply of vehicles and the exploitation of workers, leading to financial struggles and debt.

Uber collects anonymized trip data in several North American cities; according to that data, around 40% of the miles drivers cover are driven before picking up passengers.

Despite Uber’s response that drivers can earn money on other platforms during idle periods, Worker Info Exchange maintains that better compensation and expense coverage would have meant an additional £1.29bn for drivers across the industry in 2023.

The report also highlights shortcomings in how drivers’ mileage and hours are monitored, which can lead to exhaustion and safety hazards.

Similar concerns are raised about food delivery apps, with calls for more transparency in journey data.

Efforts in New York to limit vehicle licenses to support taxi drivers and reduce congestion have been noted, although recent changes exempt electric vehicles.

Uber’s carbon emissions in the UK are projected to surpass those of Transport for London, prompting calls for stricter control and transparency from regulators.

The ongoing debate around worker classification and rights in the gig economy is also highlighted, with promises from lawmakers to address issues of “false self-employment”.

Worker Info Exchange, founded by a key figure in the Uber Supreme Court case, aims to empower gig workers by providing more control over their data and decision-making processes.

Source: www.theguardian.com