Internet safety campaigners are urging the UK communications regulator, Ofcom, to limit the use of artificial intelligence in crucial risk assessments, after reports that Mark Zuckerberg’s Meta plans to automate the checks.
Ofcom said it was “considering the concerns” raised in a letter from campaigners, which followed a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram, and WhatsApp would soon be carried out by AI.
Under the UK’s online safety legislation, risk assessments are how social media platforms gauge how harm could occur on their services and how they plan to mitigate those potential harms, with a particular focus on protecting child users and preventing illegal content. The risk assessment process is viewed as a key plank of the law.
In a letter to Ofcom’s chief executive, Melanie Dawes, organizations including the Molly Rose Foundation, the NSPCC, and the Internet Watch Foundation described the prospect of AI-driven risk assessments as “a backward and bewildering move.”
They urged the regulator to publicly assert that risk assessments produced by AI will rarely be considered “appropriate and sufficient.”
The letter also called on the watchdog to “confront the belief that platforms can opt to bypass the risk assessment process.”
An Ofcom spokesperson said: “We have been clear that services should tell us who has completed, reviewed, and approved their risk assessment. We are considering the concerns raised in this letter and will respond in due course.”
Meta said the letter misrepresented the company’s approach to safety, and that it was committed to high standards and to complying with regulations.
A Meta spokesperson said: “We are not using AI to make decisions about risk. Our experts built tools that help teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content, and this has led to significantly better safety outcomes.”
The Molly Rose Foundation organized the letter after a report last month by the US broadcaster NPR that updates to Meta’s algorithms and new safety features would mostly be approved by an AI system and no longer scrutinized by human staff.
An unnamed former Meta executive told NPR that the change would allow the company to launch app updates and features on Facebook, Instagram, and WhatsApp more quickly, but would create “increased risks” for users, as potential problems would be less likely to be prevented before new products go live.
NPR also reported that Meta was considering automating reviews in sensitive areas, including risks to young users and the spread of misinformation.
Source: www.theguardian.com