A safety watchdog has warned that child sexual abuse imagery created by artificial intelligence tools is proliferating on the open web. According to the Internet Watch Foundation (IWF), the volume of AI-generated illegal content it has seen online in the past six months has already surpassed the previous year’s total.
The IWF, which operates a UK hotline but has a global mandate, has observed a rise in AI-generated material appearing on the open internet rather than on the dark web, which can only be reached through dedicated browsers. The organization’s interim chief executive highlighted how sophisticated the images have become, noting that the AI tools used to produce them were trained on images and videos of real victims.
The IWF said AI-generated content has reached a tipping point at which safety watchdogs and authorities can no longer reliably tell whether an image involves a real child. Reports of AI-generated child sexual abuse material (CSAM) rose significantly in the six months to September compared with the preceding 12 months, with 74 cases identified as violating British law.
The material identified by the IWF included AI images of real-life abuse victims, “deepfake” videos in which adult pornography was altered to resemble CSAM, and cases where AI tools were used to manipulate photographs of clothed children found online. More than half of the reported AI-generated content was hosted on servers in Russia, the United States, Japan, and the Netherlands.
Eight in 10 reports of illegal AI-generated images reportedly come from members of the public who encountered the material on public sites. In a separate development, Instagram has introduced new measures against sextortion: nude images sent in direct messages will be blurred, users will be urged to think carefully before sending or forwarding such images, and recipients will be given options to block the sender and report the message. The feature will be enabled by default for teenage accounts worldwide, alongside additional safeguards against sextortion scams.
Source: www.theguardian.com