British MPs Warn of Potential Violence in 2024 Due to Unchecked Online Misinformation

Members of Parliament have cautioned that if online misinformation is not effectively tackled, it is “just a matter of time” before viral content triggers a repeat of the violence seen in the summer of 2024.

Chi Onwurah, chair of the Commons science and technology select committee, expressed concern that ministers seem complacent regarding the threat, placing public safety in jeopardy.

The committee voiced its disappointment with the government’s response to a recent report indicating that the business models of social media companies contributed to the unrest that followed the Southport murders.

In response to the committee’s findings, the government dismissed proposals for legislation aimed at generative artificial intelligence platforms, maintaining that it would refrain from direct intervention in the online advertising sector, which MPs argued has fostered the creation of harmful content post-attack.

Onwurah noted that while the government concurs with most conclusions, it fell short of endorsing specific action recommendations.

Onwurah accused ministers of compromising public safety, stating: “The government must urgently address the gaps in the Online Safety Act (OSA); instead, it seems satisfied with the harm caused by the viral proliferation of legal but detrimental misinformation. Public safety is at stake, and it’s only a matter of time before we witness a repeat of the misinformation-driven riots of summer 2024.”

In their report titled ‘Social Media, Misinformation and Harmful Algorithms’, MPs indicated that inflammatory AI-generated images were shared on social media following the stabbing that resulted in the deaths of three children, warning that AI tools make it increasingly easier to produce hateful, harmful, or misleading content.

In its response to the committee, published on Friday, the government said no new legislation is necessary, insisting that AI-generated content already falls under the OSA, which regulates social media content, and arguing that new legislation would hinder the act’s implementation.

However, the committee highlighted Ofcom’s evidence, where officials from the communications regulator admitted that AI chatbots are not fully covered by the current legislation and that further consultation with the tech industry is essential.

The government also declined to take prompt action regarding the committee’s recommendation to establish a new entity aimed at addressing social media advertising systems that allow for the “monetization of harmful and misleading content,” such as misinformation surrounding the Southport murders.

In response, the government acknowledged concerns regarding the lack of transparency in the online advertising market and committed to ongoing reviews of industry regulations. They added that stakeholders in online advertising seek greater transparency and accountability, especially in safeguarding children from illegal ads and harmful products and services.

Addressing the committee’s request for additional research into how social media algorithms amplify harmful content, the government stated that Ofcom is “best positioned” to determine whether an investigation should be conducted.

In correspondence with the committee, Ofcom indicated that it has begun work on recommendation algorithms but acknowledged the need for further research across a broader range of academic fields.

The government also dismissed the committee’s call for an annual report to Parliament on the current state of online misinformation, arguing that it could hinder efforts to curtail the spread of harmful online information.

The British government defines misinformation as the careless dissemination of false information, while disinformation refers to the intentional creation and distribution of false information intended to cause harm or disruption.

Onwurah highlighted concerns regarding AI and digital advertising as particularly troubling. “Specifically, the inaction on AI regulation and digital advertising is disappointing,” she stated.

“The committee remains unconvinced by the government’s assertion that the OSA adequately addresses generative AI, and this technology evolves so swiftly that additional efforts are critically needed to manage its impact on online misinformation.

“And how can we combat that without confronting the advertising-driven business models that incentivize social media companies to algorithmically amplify misinformation?”

Source: www.theguardian.com

Why is the unchecked proliferation of AI-generated content being allowed to harm the internet? – Arwa Mahdawi

Hello, humans. My name is Arwa and I am a genuine member of the species Homo sapiens: a 100% real person, writing to you from meatspace. I am by no means an AI-powered bot. I know, I know. That’s exactly what a bot would say, isn’t it? I think you’ll just have to trust me on this one.

The reason I feel the need to point this out is that content created by real humans is becoming something of a novelty these days. The internet is rapidly being overtaken by AI slop. (It’s not clear who coined the term, but “slop” is essentially a sophisticated iteration of internet spam: low-quality text, video, and images generated by AI.) One recent analysis estimated that more than half of all long-form English-language posts on LinkedIn are generated by AI. Meanwhile, many news sites are quietly experimenting with AI-generated content, in some cases attributing it to AI-generated author bylines.

Slop is everywhere, but Facebook in particular is sloshing with strange AI-generated images, including bizarre depictions of a Jesus made of shrimp. Much of this AI-generated content is created by spammers and scammers looking to drive user engagement, often for fraudulent purposes, as Facebook has acknowledged. A study conducted last year by researchers at Stanford and Georgetown found that Facebook’s recommendation algorithm was amplifying these AI-generated posts.

Meta also creates its own slop. In 2023, the company began introducing AI-powered profiles such as Liv, a “proud black queer mom of two and truth teller.” These didn’t get much attention until Meta executive Connor Hayes told the Financial Times in December that the company plans to fill its platforms with AI characters. I don’t know why he thought bragging that we would soon have a platform full of AI characters talking to each other would go down well, but it didn’t. Meta quickly deleted the AI profiles after the backlash went viral.

For now, profiles like Liv may be gone from Meta, but our online future looks increasingly sloppy. The gradual “enshittification” of the internet, as Cory Doctorow memorably called it, is accelerating. Let’s pray that Shrimp Jesus performs a miracle soon. We need one.

Source: www.theguardian.com