Report: Increase in online presence of AI-generated images depicting child sexual abuse

Child sexual exploitation is increasing online, with artificial intelligence generating new forms such as images and videos related to child sexual abuse.


Reports of online child abuse to NCMEC increased by more than 12% from the previous year to over 36.2 million in 2023, according to the organization’s annual CyberTipline report. Most reports concerned the distribution of child sexual abuse material (CSAM), including photos and videos. Online criminals are also enticing children to send nude images and videos and then extorting them for money, with reports of such blackmail and extortion on the rise.

NCMEC has reported instances where children and families have been targeted for financial gain through blackmail using AI-generated CSAM.

The center has received 4,700 reports of child sexual exploitation images and videos created by generative AI, although tracking in this category only began in 2023, according to a spokesperson.

NCMEC is alarmed by the growing trend of malicious actors using artificial intelligence to produce deepfaked sexually explicit images and videos based on real children’s photos, stating that it is devastating for the victims and their families.

The group emphasizes that AI-generated child abuse content hinders the identification of actual child victims and is illegal in the United States, where production of such material is a federal crime.

Of the reports CyberTipline received in 2023, over 35.9 million concerned suspected CSAM incidents, with most uploads originating outside the US. There was also a significant rise in reports of online solicitation and of exploitation cases involving communication with children for sexual purposes or abduction.

Top platforms for cybertips included Facebook, Instagram, WhatsApp, Google, Snapchat, TikTok, and Twitter.


Of the roughly 1,600 companies worldwide registered for the CyberTipline reporting program, only 245 submitted reports to NCMEC, including US-based internet service providers that are required by law to report CSAM incidents to the CyberTipline.

NCMEC highlights the importance of quality reports, as some automated reports may not be actionable without human involvement, potentially hindering law enforcement in detecting child abuse cases.

NCMEC’s report stresses the need for continued action by Congress and the tech community to address reporting issues.

Source: www.theguardian.com

Facebook Oversight Board Rules Altered Video Depicting Biden as Pedophile Can Stay Up

Meta’s oversight board determined that a Facebook video falsely alleging that U.S. President Joe Biden is a pedophile did not violate the company’s current rules, but said those rules were “disjointed” and focused too narrowly on AI-generated content.

The board, which is funded by Facebook’s parent company Meta but operates independently, took on the Biden video case in October after receiving user complaints about a doctored seven-second video of the president.


The board ruled that under current policies, a misleadingly altered video is prohibited only if it was created by artificial intelligence or makes a person appear to say words they never actually said. The board therefore found that Meta was right to leave the video up.

This ruling is the first to criticize Meta’s policies against “manipulated media” amidst concerns about the potential use of new AI technology to influence upcoming elections.

The board stated that the policy “lacks a convincing justification, is disjointed and confusing to users, and does not clearly articulate the harms it seeks to prevent.” It suggested updating the policy to cover both audio and video content, and to apply a label indicating that it has been manipulated, regardless of whether AI is used.

The board did not recommend extending the policy to photos, reasoning that doing so could make enforcement too difficult at Meta’s scale.

Meta, which also owns Instagram and WhatsApp, informed the board that it plans to update its policies to address new and increasingly realistic advances in AI, according to the ruling.

The video on Facebook is a manipulated version of real footage of Biden exchanging “I voted” stickers with his granddaughter and kissing her on the cheek during the 2022 US midterm elections.

The board noted that non-AI modified content is “more prevalent and not necessarily less misleading” than content generated by AI tools.

It recommended that enforcement should involve applying labels to content, rather than Meta’s current approach of removing posts from the platform.

The company announced that it is reviewing the ruling and will respond publicly within 60 days.

Source: www.theguardian.com