British MPs Warn Unchecked Online Misinformation Could Spark a Repeat of the Summer 2024 Riots

Members of Parliament have cautioned that if online misinformation is not effectively tackled, it is “just a matter of time” before viral content triggers a repeat of the violence seen in the summer of 2024.

Chi Onwurah, chair of the Commons science and technology select committee, expressed concern that ministers seem complacent regarding the threat, placing public safety in jeopardy.

The committee voiced its disappointment with the government’s response to a recent report which found that the business models of social media companies contributed to the unrest that followed the Southport murders.

In response to the committee’s findings, the government dismissed proposals for legislation aimed at generative artificial intelligence platforms, maintaining that it would refrain from direct intervention in the online advertising sector, which MPs argued fostered the creation of harmful content after the attack.

Onwurah noted that while the government concurred with most of the report’s conclusions, it fell short of endorsing its specific recommendations for action.

Onwurah accused ministers of compromising public safety, stating: “The government must urgently address the gaps in the Online Safety Act (OSA); instead, it seems satisfied with the harm caused by the viral proliferation of legal but detrimental misinformation. Public safety is at stake, and it’s only a matter of time before we witness a repeat of the misinformation-driven riots of summer 2024.”

In their report titled ‘Social Media, Misinformation and Harmful Algorithms’, MPs indicated that inflammatory AI-generated images were shared on social media following the stabbing that resulted in the deaths of three children, warning that AI tools make it increasingly easier to produce hateful, harmful, or misleading content.

In its response, published by the committee on Friday, the government said that no new legislation is necessary, insisting that AI-generated content already falls under the OSA, which regulates social media content. It argued that new legislation would hinder the act’s implementation.

However, the committee highlighted Ofcom’s evidence, where officials from the communications regulator admitted that AI chatbots are not fully covered by the current legislation and that further consultation with the tech industry is essential.

The government also declined to take prompt action regarding the committee’s recommendation to establish a new entity aimed at addressing social media advertising systems that allow for the “monetization of harmful and misleading content,” such as misinformation surrounding the Southport murders.

In response, the government acknowledged concerns regarding the lack of transparency in the online advertising market and committed to ongoing reviews of industry regulations. They added that stakeholders in online advertising seek greater transparency and accountability, especially in safeguarding children from illegal ads and harmful products and services.

Addressing the committee’s request for additional research into how social media algorithms amplify harmful content, the government stated that Ofcom is “best positioned” to determine whether an investigation should be conducted.

In correspondence with the committee, Ofcom indicated that it has begun research on recommendation algorithms but acknowledged the need for further exploration across a broader spectrum of academic and research fields.

The government also dismissed the committee’s call for an annual report to Parliament on the current state of online misinformation, arguing that such a report could hinder efforts to curtail the spread of harmful online information.

The British government defines misinformation as the careless dissemination of false information, while disinformation refers to the intentional creation and distribution of false information intended to cause harm or disruption.

Onwurah highlighted concerns regarding AI and digital advertising as particularly troubling. “Specifically, the inaction on AI regulation and digital advertising is disappointing,” she stated.

“The committee remains unconvinced by the government’s assertion that the OSA adequately addresses generative AI, and this technology evolves so swiftly that additional efforts are critically needed to manage its impact on online misinformation.

“And how can we combat that without confronting the advertising-driven business models that incentivize social media companies to algorithmically amplify misinformation?”

Source: www.theguardian.com
