The Guardian has revealed that social media companies relying on artificial intelligence software to moderate their platforms are generating unworkable reports of child sexual abuse, preventing U.S. police from seeing potential leads and delaying investigations of suspected predators.
By law, U.S.-based social media companies are required to report child sexual abuse content detected on their platforms to the National Center for Missing and Exploited Children (NCMEC), which serves as a national clearinghouse for child abuse information and forwards reports to relevant law enforcement agencies in the United States and around the world. NCMEC received more than 32 million reports of suspected child sexual exploitation, comprising approximately 88 million images, videos, and other files, from companies and the general public in 2022.
Meta is the largest reporter of this information, with more than 27 million reports (84%) generated by its Facebook, Instagram, and WhatsApp platforms in 2022. NCMEC is partially funded by the Department of Justice and also receives private corporate donations.
Social media companies, including Meta, use AI to detect and report suspicious content on their sites and employ human moderators who send some flagged content to law enforcement. However, U.S. law enforcement agencies can only open reports of child sexual abuse material (CSAM) that were generated by AI without human review by serving a search warrant on the company that filed the report, which can add days or even weeks to the investigation process.
“If a company reports a file to NCMEC and does not indicate that it viewed the file before reporting, we will not be able to open the file,” said Staca Shehan, vice president of analytical services at NCMEC.
Because of Fourth Amendment protections against unreasonable searches, neither law enforcement officials nor the federally funded NCMEC can open the contents of a report without a search warrant unless the material was first reviewed by a representative of the social media company.
Because NCMEC staff and law enforcement agencies cannot legally view content flagged by AI that no human has seen, investigations into suspected predators can stall for weeks, during which time evidence may be lost.
“Any delay [in viewing the evidence] that allows criminals to go undetected for longer is detrimental to ensuring community safety,” said an assistant U.S. attorney in California, who spoke on condition of anonymity. “They are dangerous to all children.”
In December, the New Mexico Attorney General’s Office filed a lawsuit against Meta, alleging that its social networks have become a marketplace for child predators and that Meta has repeatedly failed to report illegal activity on its platforms. In response, Meta said combating child sexual abuse content was a priority.
The state attorney general laid the blame for the failure to send actionable information at Meta’s feet. “Reports showing the inefficiency of the company’s AI-generated cyber tips prove what we said in the complaint,” Raul Torrez said in a statement to the Guardian.
“It is long past time for the company to implement changes to its algorithms, staffing levels, and policies to ensure the safety of children, keep parents informed, and enable law enforcement to effectively investigate and prosecute online sex crimes against children,” Torrez added.
Despite the legal limitations of AI-based moderation, social media companies are likely to increase its use in the near future. In 2023, OpenAI, the developer of ChatGPT, claimed that large language models could do the job of human content moderators with roughly the same accuracy.
However, child safety experts say the AI software social media companies use to moderate content is effective only at identifying known child sexual abuse images, whose digital fingerprints, known as hashes, have already been catalogued. Lawyers interviewed said AI is ineffective when images are newly created or when known images or videos are altered.
“There is always concern about cases involving newly identified victims, and because they are new, the materials do not have a hash value,” said Kristina Korobov, a senior attorney at the Zero Abuse Project, a nonprofit organization focused on combating child abuse. “If humans were doing the work, there would be more discoveries of newly identified victims.”
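For readers unfamiliar with hash matching, the sketch below illustrates the general idea the experts describe, not any company’s actual system: a file is reduced to a digital fingerprint and checked against a database of fingerprints of previously identified material, so brand-new imagery produces no match. The KNOWN_HASHES set, the function names, and the placeholder digest are all hypothetical, and real moderation tools typically use perceptual hashes that tolerate small edits rather than the exact cryptographic hash used here to keep the example self-contained.

```python
# Minimal, hypothetical illustration of hash-based matching.
import hashlib

# Placeholder digests standing in for fingerprints of previously identified images.
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest acting as the file's 'digital fingerprint'."""
    return hashlib.sha256(data).hexdigest()

def is_known(data: bytes) -> bool:
    """True only if this exact file was fingerprinted and catalogued before."""
    return fingerprint(data) in KNOWN_HASHES

if __name__ == "__main__":
    new_file = b"bytes of a newly created image"
    # A newly created image has no catalogued hash, so this kind of
    # matching does not detect it.
    print(is_known(new_file))  # False
```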
Source: www.theguardian.com