British Campaigners Warn Against Meta’s Plans to Use Automation in Risk Assessment

Internet safety campaigners are calling on the UK communications regulator, Ofcom, to limit the use of artificial intelligence in crucial risk assessments, following reports that Mark Zuckerberg's Meta intends to automate these checks.

Ofcom said it would “consider the concerns” raised in the campaigners’ letter, which follows a report last month indicating that up to 90% of all risk assessments at Meta, the owner of Facebook, Instagram, and WhatsApp, would be carried out by AI.

Under the UK’s online safety legislation, social media platforms are required to assess how harm could occur on their services and how they plan to mitigate those potential harms, with particular focus on protecting child users and preventing illegal content. The risk assessment process is regarded as a core element of the law.

In correspondence addressed to Ofcom’s CEO, Melanie Dawes, organizations like the Molly Rose Foundation, NSPCC, and Internet Watch Foundation criticized the prospect of AI-led risk assessments as “a backward and bewildering move.”

They urged the regulator to state publicly that risk assessments produced mainly by AI should not normally be considered “appropriate and sufficient”.

The letter also called on the watchdog to “confront the belief that platforms can opt to bypass the risk assessment process.”

An Ofcom spokesperson said the regulator expects services to set out who has completed, reviewed, and approved their risk assessments, adding: “We are taking the concerns raised in this letter into account and will respond in due course.”


Meta said the letter misrepresented the company’s approach to safety, which it said is built on high standards and compliance with regulations.

A Meta spokesperson stated, “We have not relied on AI for making decisions regarding risk. Our specialists have developed tools that assist teams in determining when legal and policy obligations pertain to a specific product. We have enhanced our capability to manage harmful content with human-supervised technology, leading to significantly better safety outcomes.”

The Molly Rose Foundation organised the letter after a report last month by the US broadcaster NPR indicated that updates to Meta’s algorithms and new safety features would mostly be approved by AI systems rather than reviewed by human staff.

An unnamed former Meta executive told NPR that the change would allow the company to roll out app updates and features on Facebook, Instagram, and WhatsApp more quickly, but would create “increased risks” for users, because potential problems are less likely to be caught before new products are launched.

NPR also noted that Meta is exploring the possibility of automating reviews in sensitive areas, particularly concerning risks to young users and addressing the spread of misinformation.

Source: www.theguardian.com

Campaigners urge government not to trade away online safety laws in UK-US deal

Child safety campaigners have warned the government against watering down key online safety regulations as part of the UK-US trade deal, describing any such compromise as a “disturbing betrayal” that would run counter to public sentiment.

It was reported on Thursday that the preliminary transatlantic trade agreement contains provisions to review how online safety regulations are enforced, amid objections from the White House that the rules could endanger freedom of speech.

The Molly Rose Foundation, established by the family of Molly Russell, a British teenager who took her own life after viewing harmful online content, said it was disappointed and dismayed at the prospect of the regulations being used as bargaining chips in a trade agreement.

In a statement addressed to the business secretary, Jonathan Reynolds, the MRF urged the government not to continue what it described as a troubling trend of compromising on child safety.

The online newsletter Playbook reported that the commitments cover enforcement of the Online Safety Act (OSA) and another law, the Digital Markets, Competition and Consumers Act, which targets large technology platforms.

Concerns intensified this week after the US state department raised questions with Ofcom, the UK communications regulator, about the OSA’s potential impact on freedom of expression.

The Online Safety Act is geared towards safeguarding children, mandating that individuals under 18 are shielded from harmful material like content related to self-harm and suicide. Companies found in violation of the Act can face hefty fines or service suspension in the UK.

Beeban Kidron, a crossbench peer and internet safety campaigner, criticized Labour over the possibility of trading away child safety protections for economic gain. The NSPCC urged the government not to backtrack on its commitments to improve children’s online safety.

When questioned in parliament about whether the online safety and digital competition laws or the digital services tax could feature in trade discussions, the business secretary acknowledged differing views on issues such as VAT but declined to go into specifics. Sources close to Reynolds did not dispute Playbook’s account.

Peter Kyle, the technology secretary, reaffirmed the government’s position on online safety, asserting that protections for children and vulnerable people are non-negotiable.

A spokesperson for the prime minister reiterated the government’s firm position on online safety, emphasizing the importance of protecting children online and ensuring that what is illegal offline remains illegal on the internet.

Source: www.theguardian.com