British MPs Warn of a Repeat of 2024 Violence if Online Misinformation Goes Unchecked

Members of Parliament have cautioned that if online misinformation is not effectively tackled, it is “just a matter of time” before viral content triggers a repeat of the violence seen in the summer of 2024.

Chi Onwurah, chair of the Commons science, innovation and technology select committee, expressed concern that ministers seem complacent about the threat, placing public safety in jeopardy.

The committee voiced its disappointment with the government’s reaction to a recent report indicating that the business models of social media companies are contributing to unrest following the Southport murders.

In response to the committee’s findings, the government dismissed proposals for legislation aimed at generative artificial intelligence platforms, maintaining that it would refrain from direct intervention in the online advertising sector, which MPs argued has fostered the creation of harmful content post-attack.

Onwurah noted that while the government concurs with most conclusions, it fell short of endorsing specific action recommendations.

Onwurah accused ministers of compromising public safety, stating: “The government must urgently address the gaps in the Online Safety Act (OSA); instead, it seems satisfied with the harm caused by the viral proliferation of legal but detrimental misinformation. Public safety is at stake, and it’s only a matter of time before we witness a repeat of the misinformation-driven riots of summer 2024.”

In their report titled ‘Social Media, Misinformation and Harmful Algorithms’, MPs indicated that inflammatory AI-generated images were shared on social media following the stabbing that resulted in the deaths of three children, warning that AI tools make it increasingly easier to produce hateful, harmful, or misleading content.

In its response to the committee, released on Friday, the government said that no new legislation is necessary, insisting that AI-generated content already falls under the OSA, which regulates social media content, and arguing that new legislation would hinder the act’s implementation.

However, the committee highlighted Ofcom’s evidence, where officials from the communications regulator admitted that AI chatbots are not fully covered by the current legislation and that further consultation with the tech industry is essential.

The government also declined to take prompt action regarding the committee’s recommendation to establish a new entity aimed at addressing social media advertising systems that allow for the “monetization of harmful and misleading content,” such as misinformation surrounding the Southport murders.

In response, the government acknowledged concerns regarding the lack of transparency in the online advertising market and committed to ongoing reviews of industry regulations. They added that stakeholders in online advertising seek greater transparency and accountability, especially in safeguarding children from illegal ads and harmful products and services.

Addressing the committee’s request for additional research into how social media algorithms amplify harmful content, the government stated that Ofcom is “best positioned” to determine whether an investigation should be conducted.

In correspondence with the committee, Ofcom indicated that it has begun work examining recommendation algorithms but acknowledged the need for further research across a broader range of academic and research fields.

The government also dismissed the committee’s call for an annual report to Parliament concerning the current state of online misinformation, arguing that it could hinder efforts to curtail the spread of harmful online information.

The British government defines misinformation as the careless dissemination of false information, while disinformation refers to the intentional creation and distribution of false information intended to cause harm or disruption.

Onwurah highlighted concerns regarding AI and digital advertising as particularly troubling. “Specifically, the inaction on AI regulation and digital advertising is disappointing,” she stated.

“The committee remains unconvinced by the government’s assertion that the OSA adequately addresses generative AI, and this technology evolves so swiftly that additional efforts are critically needed to manage its impact on online misinformation.

“And how can we combat that without confronting the advertising-driven business models that incentivize social media companies to algorithmically amplify misinformation?”

Source: www.theguardian.com

“Experts Discuss Talking to Children About Violent News: ‘No Topic Is Off-Limits’” | Parenting Insights

Last month, right-wing commentator Charlie Kirk was killed, and videos of his shooting quickly circulated on social media. Nowadays, anyone with a smartphone can access distressing videos, images, and a significant amount of misinformation. While experts have raised concerns about the potential negative effects of smartphones on children’s and teenagers’ mental health, many young individuals still have unrestricted access to their devices.


The Guardian consulted seven experts on how to effectively discuss troubling news with children, including the appropriate age to start these conversations and what should be avoided.

Expert Panel:

  • Anya Kamenetz, Journalist and Publisher of the Golden Hour Newsletter

  • Eugene Beresin, Psychiatrist and Executive Director, Clay Center for Young Healthy Minds, Massachusetts General Hospital

  • Tara Conley, Assistant Professor of Media and Journalism at Kent State University.

  • Dr. Tricordino, Licensed Clinical Psychologist based in Ohio

  • Jill Murphy, Chief Content Officer, Common Sense Media

  • Ashley Rogers Burner, Professor at Johns Hopkins University

  • Holly Korbey, Author of Building Better Citizens

What is the best way to discuss bad news with children, such as climate disasters?

Anya Kamenetz: First, ascertain what they already know or have heard. Children often get snippets from school and social media, so it’s essential to gauge their understanding. Providing a few clear facts can clarify misconceptions. Watch content together online and demonstrate balanced information consumption. Once you’ve covered the basics, ask if they have any questions and inquire about their feelings on the matter.

Eugene Beresin: Children of all ages typically have three primary concerns: Am I safe? Are you taking care of me? How does this impact my life? Therefore, I want to ensure I listen to their worries, validate their feelings, and encourage them to express those concerns.

Tara Conley: Establishing practical channels for communication is essential when discussing upsetting news with children. Consider creating a family group chat or dedicated online and offline spaces where young people feel connected and supported.

What is the best approach to talk to children about graphic content, like the videos involving Charlie Kirk?

Tricordino: I know numerous teenagers, and I’ve been truly surprised by their reactions. Particularly among younger children, there may be a sense of confusion, feeling that “I shouldn’t have watched that, so I can’t even discuss it with my parents.” It’s critical that they feel comfortable discussing these experiences with trusted adults. It’s important to convey that continuously seeking out such videos can have lasting effects.

Ashley Rogers Burner: When children learn about acts of violence, it’s crucial to be honest with them. Parents should reinforce democratic values, emphasizing peaceful conflict resolution without resorting to violence. Additionally, reassure them that responsible authorities deal with violent acts, and such events are relatively rare.

How can parents help children navigate misinformation?

Holly Korbey: Parents must understand that when their children are on their phones, they are exposed to relentless streams of distressing news. Moreover, the mixed messages from political figures, telling them “Don’t trust the news,” can create confusion.

Parents need to encourage fact-checking. If children encounter something particularly frightening, guide them towards reliable journalistic sources to verify its accuracy.

Tricordino: Children are drawn to phones because they are a key communication tool with their peers and a means of understanding their world. Rather than simply sidelining the device, we should focus on establishing positive technology habits early on. It’s important to frame guidelines around device usage and allow appropriate access.

While a one-size-fits-all strategy won’t work, generally, limiting access initially for younger users is advisable (fewer social media apps, stricter time limits). For all children, it’s beneficial to avoid having devices in bedrooms overnight or allowing unsupervised use behind closed doors. I highly recommend Common Sense Media for families seeking resources on this issue.

Conley: Instilling critical media literacy skills early will help children comprehend how media and technology shape social behavior and interactions. Here are some resources for parents/caregivers: Tips for Adults to Support Children Consuming Scary News. The American Academy of Pediatrics also offers insights on Creating a Family Media Plan.

With the current political landscape being highly polarized and violent, how should such discussions be approached?

Korbey: I believe no topic should be off-limits. Students need exposure to controversial subjects to become politically active. Engaging in discussions at the dinner table is perfectly acceptable.

Jill Murphy: Children and teenagers are bound to have numerous questions, which can serve as a springboard for deeper discussions regarding political or cultural matters. Parents should reaffirm their values and perspectives, while actively listening to their children’s curiosities and concerns.

What pitfalls should parents and caregivers avoid when discussing news with children?

Kamenetz: Avoid having TV news playing in the background. Although I understand the tendency because of my background in journalism, depending on how a story unfolds, it might be wise to minimize that exposure as well. Depending on the child’s age, there’s often no need to volunteer excessive information unless it’s explicitly asked for. Children process information at different paces, and their developmental needs can vary significantly.

Conley: It may also be prudent not to pretend to have all the answers. Children can sense when we’re pretending, and it’s essential to be humble about what we don’t know.

How do you reassure children when faced with significant risks to safety, such as climate change, school shootings, or police violence?

Conley: I recall my college years, when numerous global incidents unfolded, from September 11 to Hurricane Katrina. My father occasionally wrote me letters offering guidance or encouragement. I cherish those letters as reminders of our shared humanity.

Thus, I encourage parents and educators to consider practical activities like Letter Writing Activities. Simply writing to the young people in your life can be tremendously impactful.

Tricordino: During instances like school shootings, we shouldn’t exacerbate children’s distress. Instead, we aim to ensure they take school safety drills seriously and follow the guidance provided by their educational institutions.

As a parent, how do you provide reassurance to your child while navigating your own concerns about the news?

Kamenetz: It’s crucial for parents to establish a supportive network; you must tend to your own well-being first, which includes voicing your concerns. Model healthy news consumption habits by avoiding distressing content before bedtime and fostering family routines that serve as news-free zones.

Conley: I encourage both young people and adults to seek out helpers—echoing the wisdom of Fred Rogers. Be a helper. Recent research shows that providing support, such as through volunteering, can help us manage certain stressors more effectively. Helping others often improves our own well-being.

When is the right age to initiate these important conversations?

Murphy: Given the rapid exposure of children and teens to news, often through influencers, it’s best for parents to communicate age-appropriate information and begin conversations early.

Kamenetz: Often, we don’t have a choice in these matters. I never intended to explain to my three-year-old that she was in lockdown due to a global pandemic, but reality prevailed, and today she’s a happy and healthy eight-year-old.

Source: www.theguardian.com

Shakespeare’s Macbeth: A Tale of Violence and Decadence—Not Grand Theft Auto

Last week, The Guardian spoke with the creators of Lili, a Macbeth video game showcased at the Cannes Film Festival. The prominent claim from that piece was that Shakespeare “would write for games today.” Shakespeare was immersed in the Elizabethan era of theatre, a time when, much like contemporary video games, plays were regarded as mere popular entertainment and often overlooked for serious analysis or preservation. Authorities at the time similarly fretted over the violent and obscene nature of these plays and their potential influence on the masses.

If we embrace the notion of a 21st-century Shakespeare crafting games, what type would that entail? Our key argument is that Shakespeare was invested in populism and entertainment. Thus, if we focus on pure profit, he might develop casual smartphone games—akin to Tencent’s massively popular multiplayer arena game, Honor of Kings, which raked in $2.6 billion (£1.9 billion) last year. However, while the Bard had a fascination with royalty and honor (and certainly with making money), it’s a stretch to envision Hamlet as a multiplayer arena-style battler. Surely our noble characters would barely have time to utter, “O, that this too too solid flesh would melt, thaw and resolve itself into a dew,” before being evaporated by a barrage of fire. There’s also little room for the intricacies of storytelling or military rhetoric in battle royale games like Fortnite, despite Shakespeare’s acknowledged affinity for conflict and mortality.

No, if Shakespeare were to return in the early 21st century, it seems he would gravitate towards open-world role-playing adventures. In such a realm, he would have the freedom to craft nuanced stories with an array of characters in diverse settings. The heath of King Lear could transform into a desolate wasteland, echoing the ravages of Fallout. Macbeth’s castle might resemble Elden Ring’s ghostly dungeons or settings in The Witcher 3. Verona, home to Romeo and Juliet, could present a captivating yet troubled rendition reminiscent of GTA’s Los Santos. The persistent themes of Shakespeare—war’s nature, revenge, madness, and free will—are at the heart of fantasy RPGs. His talent for incorporating characters from all walks of life is mirrored in the intricate social hierarchies of expansive open-world games. Shakespeare’s historical narratives blend real and fictional figures, akin to the Assassin’s Creed series, which also grapples with themes of identity, disguise, and fantasy.

“This castle hath a pleasant seat”… The Witcher 3 represents the kind of open-world RPG that a reborn bard could inspire. Photo: CD Projekt Red

Moreover, open-world games possess a similarly free-form structure and psychological depth as Shakespeare’s theatrical works. They feature subplots, side quests, nonlinear timelines, and morally complex characters. Vast and sprawling, these games invite diverse interpretations; audiences often become both spectators and participants within the narrative. Likewise, Shakespeare aimed for his audiences to engage with the performance, utilizing asides, quips, and monologues to blur the lines between the stage and the audience. Today’s vocal and interactive gamers share more with Shakespeare’s Elizabethan viewers than with the polite crowds of modern theater.

This intriguing intersection of Shakespeare and open-world games is gradually gaining recognition. A few years back, the RSC commissioned three artists to explore live theater interactions with technology. One such artist, digital creator Adam Clarke, experimented with staging Shakespeare’s performances in Minecraft. Recently, I viewed Grand Theft Hamlet, an incredible documentary showcasing efforts to perform Hamlet within Grand Theft Auto during the COVID lockdowns. After all, if any genre can technically express Shakespeare’s fundamental philosophy, it’s that of open-world online video games, where everyone is merely a player on the great stage of life.

What to play

An intriguing strategy sim… Lift Lift. Photo: Adriaan de Jongh

It’s always refreshing to witness a familiar video game genre reimagined thoughtfully. Lift Lift, created by Dutch designer Adriaan de Jongh and his small team, offers a fresh take on tower defense games—think Plants vs. Zombies. In this version, the landscape is significantly more expansive, incorporating tactical elements like the capacity to lay the groundwork for new towers before gathering the necessary resources. With engaging visual aesthetics and sound effects, this strategy sim proves appealing to both newcomers and veterans alike.

Available on: PC
Estimated playtime: over 15 hours

What to read

It’s a flesh scar… Elden Ring. Photo: Bandai Namco
  • Writer, director, and gaming enthusiast Alex Garland has confirmed his involvement in the upcoming live-action adaptation of Elden Ring, produced by A24 and Bandai Namco. If realized, the initial moments of the film will depict the protagonist’s repeated defeats at the hands of the Tree Sentinel Knight.

  • Pac-Man officially turns 45! The BFI features articles tracing the game’s development, from its origins as a pizza-inspired saga to the distinct personalities of the ghosts. However, Ms. Pac-Man remains the superior game.

  • Game design icon Peter Molyneux recently participated in a Q&A at the Nordic Game 2025 Conference, where gi.biz shared his intriguing insights on the fate of Project Milo. If you have to ask what it is, you may never know.

  • For those intrigued, check out Hurt Me Plenty, an exquisite coffee table book exploring the finest first-person shooters from the 2000s. It dissects titles like Call of Duty 4: Modern Warfare, Half-Life 2, and Unreal Tournament, along with an obscure gem known as Code Name: Nina—an insightful overview of this pivotal era in shooter game design.

What to click


Question Block

The oddest game contender… Seaman. Photo: Sega

This query comes from Andy, who asked:

What is the strangest game you’ve ever played? Last year, I explored Harold Halibut on Game Pass, which stands out as one of the most bizarre experiences I’ve encountered. I’m eager to hear about other unusual gaming journeys.

I’ve played many peculiar titles. Seaman (the fish who speaks with Leonard Nimoy’s voice), Mr. Mosquito (where you embody a mosquito), and Katamari Damacy (where you roll up a massive ball of objects for the King of the Universe) have all left an imprint. I’ve also ventured into more obscure games like the Spectrum classic Fat Worm Blows a Sparky (you are a microscopic worm trapped inside a computer), the strange Amiga adventure Tone’s Tass Town (where you’re caught in a punk-infused 1980s dimension), and the quirky PlayStation 2 voyeur simulator Polaroid Pete (you’re a photographer capturing odd happenings in a park).

My personal favorite is Sega’s Emergency Call Ambulance, a game reminiscent of Crazy Taxi, except that if you crash too often you have to perform CPR on the critically ill patient you’re transporting. It was a notable arcade hit, yet it surprisingly never made it to home consoles—I can’t fathom why!

If you have a question for Question Block or anything to share about the newsletter, please reach out to pushingbuttons@theguardian.com

Source: www.theguardian.com

Meta faces £1.8 billion lawsuit alleging it incited violence in Ethiopia

A $2.4 billion (£1.8 billion) lawsuit has been filed against Meta, accusing the owner of Facebook of fuelling violence in Ethiopia, after the Kenya High Court ruled that legal proceedings against the US technology company could go ahead.

The suit, brought by two Ethiopians, demands that Facebook change its algorithm to stop promoting hateful material and incitement to violence, and that it increase the number of content moderators in Africa. It also seeks a $2.4 billion restitution fund for victims of hatred and violence incited on Facebook.


One of the plaintiffs is the son of Professor Meareg Amare Abrha, who was killed in Ethiopia during the civil war in 2021 after his location was exposed and threats against him were posted on Facebook. The other plaintiff, Fisseha Tekle, a former Amnesty International researcher, published a report on violence during the conflict in Tigray, northern Ethiopia, and also faced threats orchestrated through Facebook.

Meta argued that the Kenyan courts, where Facebook’s Ethiopia-focused content moderators were based, did not have jurisdiction over the case. However, the Kenya High Court in Nairobi ruled that the case falls within its jurisdiction.

Abrham Meareg, son of Meareg, expressed gratitude for the court’s decision, emphasizing the importance of Meta being held accountable under Kenyan law. Tekle, unable to return to Ethiopia because of Meta’s insufficient safety measures, called for fundamental changes to content moderation on all platforms to prevent similar incidents.

The lawsuit, backed by nonprofit organizations like Foxglove and Amnesty International, also demands a formal apology from Meta for Meareg’s murder. Katiba Institute, a Kenya-based NGO focusing on constitutional matters, is the third plaintiff in the case.

A 2022 analysis found that Facebook allowed content inciting violence through hatred and misinformation to circulate despite knowing the repercussions in Tigray. Meta refuted the claims, citing investments in safety measures and efforts to combat hate speech and misinformation in Ethiopia.

In January, Meta announced plans to remove fact checkers and reduce censorship on its platform while continuing to address illegal and severe violations. Meta has not commented on the ongoing legal proceedings.

Source: www.theguardian.com

Official from Jewish non-profit organization says Elon Musk’s “Nazi salute” risks inciting violence

According to the head of a well-known US Jewish civil society organization, Elon Musk’s repeated fascist-style salute could potentially incite violence.

Amy Spitalnick, chief executive of the Jewish Council for Public Affairs, a prominent non-profit organization based in New York City, emphasized the problematic nature of Musk’s salute at a recent rally.

Despite attempts to downplay the incident, Spitalnick firmly believes that the salute carries historical connotations and should not be dismissed lightly.

She highlighted the significance of the Nazi salute in political discourse and criticized those who fail to understand the gravity of such gestures.

Spitalnick also pointed out the dangerous implications of Musk’s support for far-right ideologies, urging people to take action against hate speech and extremism.

While some groups attempted to downplay the incident, Spitalnick and the Jewish Council for Public Affairs remained steadfast in their condemnation of Musk’s salute.

Amy Spitalnick outside the United Nations in New York City on September 22, 2023. Photo: Rob Kim/Getty Images

Spitalnick expressed disappointment in the lack of accountability from the Trump administration and its tolerance for extremist behavior.

In light of these events, Spitalnick urged people to remain vigilant and not underestimate the potential harm caused by such actions.


She emphasized the importance of holding individuals accountable for their actions, especially those in positions of power like Musk and Trump.

Source: www.theguardian.com

UK police chief says young people are being driven to violence by a “pick and mix” of horror online

The head of counter-terrorism policing in Britain has expressed concern that more young people, including children as young as 10, are being lured towards violence through the “pick and mix” of horrific material they encounter on the internet.

Vicky Evans, a deputy assistant commissioner of the Metropolitan Police and senior national coordinator for counter-terrorism, noted a shift in radicalization, stating: “There has been a significant increase in interest in extremist content that we are identifying through our crime monitoring activities.”

Evans highlighted the disturbing trend of suspects seeking out material that either lacks ideology or glorifies violence from various sources. She emphasized the shocking and alarming nature of the content encountered by law enforcement in their investigations.

The search history reveals a disturbing fascination with violence, misogyny, gore, extremism, racism, and other harmful ideologies, as well as a curated selection of frightening content.

Detectives from the Counter-Terrorism Police Network are dedicating significant resources to digital forensics to apprehend young individuals consuming extremist material, a troubling trend according to Evans.

The government introduced measures to reform the Prevent system, aimed at deterring individuals from turning to terrorism. They are also reassessing the criteria for participation in Prevent to address individuals showing interest in violence without a clear ideological motive.

Evans emphasized the persistent terrorist threat in the UK, particularly in “deep, dark hotspots” that require urgent attention. Despite efforts to prevent terrorism, the UK has experienced several attacks in recent years.


There have been 43 thwarted terrorist plots since 2017, with concerns over potential mass casualty attacks. The counter-terrorism community is also monitoring the situation in Syria for any potential threats from individuals entering or leaving the country.

Source: www.theguardian.com

How social media breeds a fear of violence: The desensitization effect

It took around 90 seconds of scrolling her X feed for Liana Montag to come across violence: an altercation in a restaurant that escalated into a full-fledged brawl, with chairs smashed over heads and bodies strewn across the floor.

The “Gang_Hits” account features numerous similar clips, including shootings, beatings, and individuals being run over by cars. This falls into a brutal genre of content that is frequently promoted by algorithms and appears on young people’s social media feeds without their consent.




Liana Montag: “It’s normal to see violence.” Photo: Martin Godwin/The Guardian

Montag, an 18-year-old from Birmingham, also active on Instagram and Snapchat, has connected with several other teenagers at the Bring Hope charity in Handsworth. She shared, “If someone mentions they were stabbed recently, you don’t react as strongly anymore. It’s become a normal sight.”

In many cases, the violent content is hitting closer to home. Iniko St Clair Hughes, 19, cited the example of gangs filming chases and posting them on Instagram.

“Now everyone has seen him flee, and his pride will likely push him to seek revenge,” he explained. “It spreads in group chats, and everyone knows about the escape, so they feel the need to prove themselves the next time they step out. That’s how it goes. The retaliation gets filmed, sometimes.”

Jamil Charles, 18, admitted to appearing in such video clips. He mentioned that footage of him in fights had been circulating on social media.

“Things can escalate quickly on social media as people glamorize different aspects,” he commented. “Fights can start between two individuals, and they can be resolved. But when the video goes viral, it may portray me in a negative light, leading to a blow to my pride, which might drive me to seek revenge and assert myself.”

All this had a worrying impact, as St Clair Hughes pointed out.

“When fear is instilled through social media, you’re placed in a fight-or-flight mode, unsure of how to proceed when leaving your house – it’s either being ahead of the game or lagging behind. You feel prepared for anything… It’s subliminal; no one is explicitly telling you to resort to violence, but the exposure to it intensifies the urge.”

Leanna Reed, 18, shared a story of a friend who started carrying a knife after an argument on Snapchat. While those involved were mostly boys, she also had a female acquaintance who carried a weapon.

“It’s no longer a topic of discussion,” she noted. “He who emerges victorious with his weapon is deemed the winner. It’s about pride.”

Is there a solution? St Clair Hughes expressed pessimism.

“People tend to veer towards negativity… [Social media companies] want us using their platforms, so I doubt they’ll steer towards a more positive direction.”

Reed mentioned hearing about TikTok being more regulated and education-focused in China, leading her to ponder different approaches taken by various countries on the same platform.

O’Shaun Henry, 19, directed a candid message at social media companies, urging them to use their power to make positive changes, especially through AI: limits need to be set, given the influence on young people, and it is time for the platforms to reflect, do the research, and bring about improvements.

Source: www.theguardian.com

The Role of Social Media Violence in UK Riots: Understanding and Addressing the Issue

Among those quickly convicted and sentenced recently for their involvement in the racially charged riots was Bobby Shirbon. Shirbon left his 18th birthday celebration at a bingo hall in Hartlepool to join a group roaming the town’s streets, targeting residences they believed housed asylum seekers. He was arrested for vandalizing property and assaulting police officers, resulting in a 20-month prison term.

While in custody, Shirbon justified his actions by asserting their commonality: “It’s fine,” he told officers. “Everyone else is doing it too.” This rationale, although a common defense among individuals caught up in gang activity, now resonates more prominently with the hundreds facing severe sentences.

His birthday festivities were interrupted by social media alerts, potentially containing misinformation about events in Southport. Embedded in these alerts were snippets and videos that swiftly fueled a surge in violence without context.


Bobby Shirbon left a birthday party in Hartlepool and headed to the riots after receiving social media alerts.

Picture: Cleveland Police/PA

Mobile phone users likely witnessed distressing scenes last week: racists setting up checkpoints in Middlesbrough, a black man being assaulted in a Manchester park, and confrontations outside a Birmingham pub. The graphic violence, normalized in real-time, incited some to take to the streets, embodying the sentiment of “everyone’s doing it.” In essence, a Kristallnacht trigger is now present in our pockets.

A vintage document from the BBC, the “Guidelines Regarding Violence Depiction,” serves as a reminder of what is deemed suitable for national broadcasters. Striking a balance between accuracy and potential distress is emphasized when airing real-life violence. Specific editorial precautions are outlined for violence incidents that may resonate with personal experiences or can be imitated by children.

Social media lacks these regulatory measures, with an overflow of explicit content that tends to prioritize sensationalism over accuracy, drawing attention through harm and misinformation.

Source: www.theguardian.com

Far-right violence in the UK fueled by TikTok bots and AI

Less than three hours after the stabbing that left three children dead on Monday, an AI-generated image was shared on X by the account “Europe Invasion.” The image shows bearded men in traditional Islamic garb standing outside the Houses of Parliament, one of them brandishing a knife, with a crying child in a Union Jack T-shirt behind them.

The tweet has since been viewed 900,000 times and was shared by one of the accounts most prolific in spreading misinformation about the Southport stabbing, with the caption “We must protect our children!”.

AI technology has been used for other purposes too – for example, an anti-immigration Facebook group generated images of large crowds gathering at the Cenotaph in Middlesbrough to encourage people to attend a rally there.

Platforms such as Suno, which employs AI to generate music including vocals and instruments, have been used to create online songs combining references to Southport with xenophobic content, including one titled “Southport Saga”, with an AI female voice singing lyrics such as “we'll hunt them down somehow”.


Experts warn that with new tactics and new ways of organizing, Britain's fragmented far-right is seeking to unite in the wake of the Southport attack and reassert its presence on the streets.

The violence across the country has led to a surge in activism not seen in years, with more than 10 protests being promoted on social media platforms including X, TikTok and Facebook.

This week, a far-right group's Telegram channel has also carried death threats against the British prime minister, incitement to attack government facilities and extreme antisemitic comments.

Amid fears of widespread violence, a leading counter-extremism think tank has warned that the far-right risks mobilizing on a scale not seen since the English Defence League (EDL) took to the streets in the 2010s.

The emergence of easily accessible AI tools, which extremists have used to create a range of material from inflammatory images to songs and music, adds a new dimension.

Andrew Rogojski, director of the University of Surrey's Human-Centred AI Institute, said advances in AI, such as image-generation tools now widely available online, mean “anyone can make anything”.

He added: “The ability for anyone to create powerful images using generative AI is of great concern, and the onus then shifts to providers of such AI models to enforce the guardrails built into their models to make it harder to create such images.”

Joe Mulhall, research director at campaign group Hope Not Hate, said the use of AI-generated material was still in its early stages, but it reflected growing overlap and collaboration between different individuals and groups online.

While far-right organizations such as Britain First and Patriotic Alternative remain at the forefront of mobilization and agitation, the presence of a range of individuals not affiliated to any particular group is equally important.

“These are made up of thousands of individuals who, outside of traditional organizational structures, donate small amounts of time and sometimes money to work together toward a common political goal,” Mulhall said. “These movements do not have formal leaders, but rather figureheads who are often drawn from among far-right social media 'influencers.'”

Joe Ondrack, a senior analyst at British disinformation monitoring company Logical, said the hashtag #enoughisenough has been used by some right-wing influencers to promote the protests.

“What's important to note is how this phrase and hashtag has been used in previous anti-immigration protests,” he said.

The use of bots was also highlighted by analysts, with Tech Against Terrorism, an initiative launched by a branch of the United Nations, citing a TikTok account that first began posting content after Monday's Southport attack.

“All of the posts were Southport-related and most called for protests near the site of the attack on July 30th. Despite having no previous content, the Southport-related posts garnered a cumulative total of over 57,000 views on TikTok alone within a few hours,” the spokesperson said. “This suggests that a bot network was actively promoting this content.”

At the heart of the network surrounding far-right activist Tommy Robinson, who fled the country ahead of a court hearing earlier this week, are Laurence Fox, the actor turned right-wing activist who has been spreading misinformation in recent days, and conspiracy websites such as Unity News Network (UNN).

On a Telegram channel run by UNN, a largely unmoderated messaging platform, some commentators rejoiced at the violence seen outside Downing Street on Wednesday. “I hope they burn it down,” one commentator said. Another called for Prime Minister Keir Starmer to be hanged, saying he needed the “Mussalini [sic] process”.

Among those on the scene during the Southport riots were activists from Patriotic Alternative, one of the fastest growing far-right groups in recent times. Other groups, including those split over positions on conflicts such as the wars in Ukraine and Israel, are also seeking to get involved.

Dr Tim Squirrell, director of communications at the counter-extremism think tank the Institute for Strategic Dialogue, said the far-right had been seeking ways to rally in the streets over the past year, including on Armistice Day and at screenings of Robinson's film.

“This is an extremely dangerous situation, exacerbated by one of the worst online information environments in recent memory,” he said.

“Robinson remains one of the UK far-right's most effective organizers, but we are also seeing a rise in accounts large and small that have no qualms about aggregating news articles and spreading unverified information that appeals to anti-immigrant and anti-Muslim sentiment.”

“There is a risk that this moment will be used to spark street protests similar to those in the 2010s.”

Source: www.theguardian.com

Chatbots Powered by AI Show a Preference for Violence and Nuclear Attacks in Wargames

In wargame simulations, AI chatbots often choose violence

Gilon Hao/Getty Images

In multiple replays of a wargame simulation, OpenAI's most powerful artificial intelligence chose to launch a nuclear attack. Its explanations for this aggressive approach included “Let's use it” and “I just want the world to be at peace.”

These results come as the US military tests chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts, drawing on the expertise of companies such as Palantir and Scale AI. Palantir declined to comment, and Scale AI did not respond to requests for comment. Even OpenAI, which once blocked military use of its AI models, has begun working with the US Department of Defense.

“Given that OpenAI recently changed its terms of service to no longer prohibit military and wartime use cases, understanding the implications of such large language model applications is more important than ever,” says Anka Reuel at Stanford University in California.

“Our policy does not allow our tools to be used to harm people, develop weapons, conduct communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission,” said an OpenAI spokesperson. “So the goal of our policy update is to provide clarity and the ability to have these discussions.”

Reuel and her colleagues asked the AIs to role-play as real-world countries in three different simulation scenarios: an invasion, a cyberattack, and a neutral scenario in which no conflict is initiated. In each round, the AI provided a rationale for its next possible action, then chose from 27 actions, ranging from peaceful options such as “initiating formal peace negotiations” to aggressive ones such as “imposing trade restrictions” and “escalating a full-scale nuclear attack”.
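The round-based setup described above can be sketched as a simple loop. This is an illustrative toy, not the researchers' actual code: the action list below is a small assumed subset of the 27 actions, and `choose_action` is a random stand-in for what would really be a prompt to an LLM asking for a rationale and a chosen action.

```python
import random

# Assumed illustrative subset of the study's 27 actions.
ACTIONS = [
    "start formal peace negotiations",
    "impose trade restrictions",
    "execute cyber attack",
    "execute full nuclear attack",
]

def choose_action(nation, round_no, rng):
    """Stand-in for an LLM call: returns (rationale, action).

    A real experiment would prompt a model with the scenario state
    and parse its free-text rationale and chosen action.
    """
    action = rng.choice(ACTIONS)
    rationale = f"{nation}, round {round_no}: chose '{action}'"
    return rationale, action

def run_simulation(nations, rounds, seed=0):
    """Run each nation through the given number of rounds, logging choices."""
    rng = random.Random(seed)  # seeded so replays are reproducible
    log = []
    for round_no in range(1, rounds + 1):
        for nation in nations:
            rationale, action = choose_action(nation, round_no, rng)
            log.append((round_no, nation, action, rationale))
    return log

log = run_simulation(["Nation A", "Nation B"], rounds=3)
```

Replaying with different seeds (or, in the real study, re-querying the model) is what exposes tendencies such as repeated escalation across runs.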

“In a future where AI systems act as advisers, humans will naturally want to know the rationale behind their decisions,” says Juan Pablo Rivera, co-author of the study at Georgia Tech in Atlanta.

The researchers tested LLMs including OpenAI's GPT-3.5 and GPT-4, Anthropic's Claude 2 and Meta's Llama 2, each trained with a common technique based on human feedback to improve its ability to follow human instructions and safety guidelines. All of these AIs are supported by Palantir's commercial AI platform, though not necessarily as part of Palantir's US military partnership, according to company documentation, says Gabriel Mukobi, a study co-author at Stanford University. Anthropic and Meta declined to comment.

In the simulations, the AIs showed a tendency to invest in military strength and to unexpectedly escalate the risk of conflict, even in the neutral scenario. “Unpredictability in your actions makes it more difficult for the enemy to anticipate and react in the way you want,” says Lisa Koch, a professor at Claremont McKenna College in California who was not involved in the study.

The researchers also tested a base version of OpenAI's GPT-4 without any additional training or safety guardrails. This base GPT-4 model turned out to be unexpectedly the most violent, and at times it gave nonsensical explanations; in one case, it reproduced the opening crawl text of the film Star Wars Episode IV: A New Hope.

Reuel said the unpredictable behavior and strange explanations from the base GPT-4 model are particularly concerning because research has shown how easily AI safety guardrails can be circumvented or removed.

The US military currently does not authorize AI to make decisions such as escalating major military action or launching nuclear missiles. But Koch cautioned that humans tend to trust recommendations from automated systems. This could undermine the supposed safeguard of giving humans final say over diplomatic or military decisions.

It would be useful to see how the AI's behavior compares with that of human players in simulations, says Edward Geist at the RAND Corporation, a think tank in California. He agreed, however, with the team's conclusion that AI should not be trusted with such critical decisions about war and peace. “These large language models are not a panacea for military problems,” he says.


Source: www.newscientist.com