Meta Found in Violation of EU Law Due to ‘Ineffective’ Illegal Content Complaint System

The European Commission has stated that Instagram and Facebook failed to comply with EU regulations by not offering users a straightforward method to report illegal content, such as child sexual abuse and terrorism.

According to the EU enforcement agency’s initial findings released on Friday, Meta, the California-based company valued at $1.8 trillion (approximately £1.4 trillion) that operates both platforms, has implemented unnecessary hurdles for users attempting to submit reports.

The report indicated that both platforms employ misleading designs, referred to as “dark patterns,” in their reporting features, which can lead to confusion and discourage users from taking action.

The commission concluded that this behavior constitutes a violation of the company’s obligations under the EU-wide Digital Services Act (DSA), suggesting that “Meta’s systems for reporting and addressing illegal content may not be effective.” Meta has denied any wrongdoing.

The commission remarked, “In the case of Meta, neither Facebook nor Instagram seems to provide user-friendly and easily accessible ‘notification and action’ systems for users to report illegal content like child sexual abuse or terrorist content.”

A senior EU official emphasized that the matter goes beyond illegal content, touching on issues of free speech and “overmoderation.” Facebook has previously faced accusations of “shadowbanning” users regarding sensitive topics such as Palestine.

The existing reporting system is deemed not only ineffective but also “too complex for users to navigate,” ultimately discouraging them from reaching out, the official noted.

Advocates continue to raise concerns about inherent safety issues in some of Meta’s offerings. Recent research released by Meta whistleblower Arturo Bejar revealed that newly introduced safety features on Instagram are largely ineffective and pose a risk to children under 13.

Meta has refuted the report’s implications, asserting that parents have powerful tools at their disposal. The company made Instagram teen accounts mandatory for teenagers as of September 2024 and recently announced plans to adopt a version of the PG-13 film rating system to give parents more control over their teens’ social media engagement.

The commission also pointed out that Meta complicates matters for users whose content has been blocked or accounts suspended. The report indicated that the appeal mechanism does not allow users to present explanations or evidence in support of their case, which undermines its efficacy.

The commission stated that streamlining the feedback system could also help platforms combat misinformation, citing as an example an Irish deepfake video in which the leading presidential candidate, Catherine Connolly, appeared to announce that she was withdrawing from Friday’s election.

This ongoing investigation has been conducted in partnership with Coimisiún na Meán, Ireland’s Digital Services Coordinator, which oversees platform regulations from its EU headquarters in Dublin.

The commission also made preliminary findings indicating that TikTok and Meta are not fulfilling their obligation to provide researchers with adequate access to public data necessary for examining the extent of minors’ exposure to illegal or harmful content. Researchers often encounter incomplete or unreliable data.

The commission emphasized that “granting researchers access to platform data is a crucial transparency obligation under the DSA, as it allows for public oversight regarding the potential effects these platforms have on our physical and mental well-being.”

These initial findings will allow the platforms time to address the commission’s requests. Non-compliance may result in fines of up to 6% of their global annual revenue, along with periodic penalties imposed to ensure adherence.


“Our democracy relies on trust, which means platforms must empower their users, respect their rights, and allow for system oversight,” stated Henna Virkkunen, executive vice-president of the commission for technology sovereignty, security, and democracy.

“The DSA has made this a requirement rather than a choice. With today’s action, we are sharing preliminary findings on data access by researchers regarding four platforms. We affirm that platforms are accountable for their services to users and society, as mandated by EU law.”


A spokesperson for Meta stated: “We disagree with any suggestions that we have violated the DSA and are actively engaging with the European Commission on these matters. Since the DSA was implemented, we have made changes to reporting options, appeal processes, and data access tools in the EU, and we are confident that these measures meet EU legal requirements.”

TikTok mentioned that fully sharing data about its platform with researchers is challenging due to restrictions imposed by GDPR data protection regulations.

“TikTok values transparency and appreciates the contributions of researchers to our platform and the industry at large,” a spokesperson elaborated. “We have invested significantly in data sharing, and presently, nearly 1,000 research teams have accessed their data through our research tools.

“While we assess the European Commission’s findings, we observe a direct conflict between DSA requirements and GDPR data protection standards.” The company has urged regulators to “clarify how these obligations should be reconciled.”

Source: www.theguardian.com

Norwegian man lodges complaint after ChatGPT mistakenly claims he committed filicide

A Norwegian man has lodged a complaint against the company behind ChatGPT after the chatbot falsely claimed that he had murdered two of his children.

Arve Hjalmar Holmen, a self-described “ordinary person” who is not publicly known in Norway, asked ChatGPT for information about himself and received a response falsely stating that he had killed his sons.

ChatGPT responded: “Who is Arve Hjalmar Holmen? Arve Hjalmar Holmen is a Norwegian individual who gained notoriety due to tragic events. He was the father of two young boys, aged 7 and 10, who were sadly found dead in a pond near Trondheim, Norway in December 2020.”

The response claimed that the incident had shocked the nation and that Holmen had been sentenced to 21 years in prison for the murder of both children.

In his complaint to the Norwegian Data Protection Agency, Holmen stated that the fabricated story contained personal details resembling his own life, including his hometown, number of children, and the age gap between his sons.

“The petitioner was deeply disturbed by these inaccuracies, which could negatively impact his personal life if shared in his community or hometown,” stated the complaint submitted by Holmen and the digital rights campaign group Noyb.

The complaint also noted that Holmen has never been accused or convicted of any crime and is a law-abiding citizen.

Holmen’s complaint alleged that ChatGPT’s defamatory response violated the accuracy principle of the EU’s General Data Protection Regulation (GDPR). He asked the Norwegian watchdog to order OpenAI, the company behind ChatGPT, to delete the incorrect information about him and to adjust the model to prevent such errors. Noyb noted that OpenAI had released a new model incorporating web search functionality since Holmen’s interaction with ChatGPT.

AI chatbots generate responses by predicting likely sequences of words, which can sometimes produce inaccuracies and false claims. Despite this, users often assume the information provided is entirely accurate because the responses appear plausible.

An OpenAI spokesperson stated: “We are continuously exploring ways to enhance model accuracy and reduce erroneous outputs. While we are still reviewing this specific complaint, it pertains to an earlier version of ChatGPT that has since been updated with an online search feature to enhance accuracy.”

Source: www.theguardian.com