Child safety experts say Apple lacks effective monitoring and scanning protocols for child sexual abuse material on its platforms, raising concerns about the company's ability to contend with the growing volume of such content linked to artificial intelligence.
The UK's National Society for the Prevention of Cruelty to Children (NSPCC) has accused Apple of undercounting the prevalence of child sexual abuse material (CSAM) on its products. Police data obtained by the NSPCC shows that offenders in England and Wales used Apple's iCloud, iMessage, and FaceTime to store and share CSAM in more cases than Apple reported across all other countries combined.
Based on data gathered through a Freedom of Information request and shared exclusively with The Guardian, the charity found that Apple was implicated in 337 recorded offences of child abuse images in England and Wales between April 2022 and March 2023. In 2023, Apple made just 267 reports of suspected child abuse imagery worldwide to the US National Center for Missing & Exploited Children (NCMEC), far fewer than other leading tech companies: Google submitted more than 1.47 million reports and Meta more than 30.6 million, according to NCMEC's annual report.
All US-based technology companies are required to report any CSAM they detect on their platforms to NCMEC. Apple's iMessage service is encrypted, meaning the company cannot see the contents of users' messages, but so is Meta's WhatsApp, which nevertheless made roughly 1.4 million reports of suspected CSAM to NCMEC in 2023.
Richard Collard, head of child safety online policy at the NSPCC, said he was concerned by the gap between the number of UK child abuse image offences involving Apple's services and the company's small number of global reports, and urged Apple to prioritize safety and comply with the UK's online safety legislation.
Apple declined to comment, referring instead to a statement from last August in which it said it had decided not to proceed with a program to scan iCloud photos for CSAM, citing the security and privacy of its users as top priorities.
In late 2022, Apple abandoned plans for an iCloud photo-scanning tool, known as Neural Match, which would have compared uploaded images against a database of known child abuse imagery. The scanning plan had drawn opposition from digital rights groups, while the decision to drop it dismayed child safety advocates.
Experts are also worried about Apple Intelligence, the AI system Apple announced in June, warning that AI-generated child abuse content endangers children and undermines law enforcement's ability to protect them.
Child safety advocates also point to a rise in reports of AI-generated CSAM and to the harm such images inflict on victims and survivors of child abuse.
Sarah Gardner, chief executive of the child safety organization Heat Initiative, said Apple's efforts to detect CSAM fall short and urged the company to strengthen its safety measures.
More broadly, child safety experts worry about the implications of Apple's AI technology for the safety of children and the prevalence of CSAM online.
Source: www.theguardian.com