Elon Musk Emerges as a Key Figure in Far-Right Circles After Departing from the White House

Far-right activist Tommy Robinson expressed gratitude to a benefactor who supported his legal defense as he exited a London courtroom this week, following a judge’s ruling that acquitted him of terrorism charges.

“Mr. Elon Musk, I cannot thank you enough. Without your financial aid during my legal battle, I might have faced imprisonment,” Robinson remarked. “Thank you, Elon.”

Following Musk’s tumultuous departure from the White House, Tesla’s CEO suggested he was stepping away from politics. Investors keen for him to concentrate on his business were pleased, resulting in a rise in Tesla’s stock price. However, since then, Musk has shown no signs of abandoning his political pursuits; instead, he has immersed himself deeper into election matters and far-right views on immigration.

Musk’s political activities post-Trump administration have seen him leverage social media to sway the New York mayoral race and develop a right-wing, AI-generated alternative to Wikipedia. He remarked that the “homeless industrial complex” was detrimental to California and stated that “white pride” should be acknowledged. On X, he warned that Britain might plunge into civil war and that Western civilization faced collapse.

On the day Robinson conveyed his thanks, Musk made allegations on social media about “illegals” voting fraudulently in the U.S., amplified objections to video games labeled as “woke,” and characterized established news outlets as left-leaning propaganda.

The world’s wealthiest individual’s political inclinations are adversely affecting his business. A recent Yale University report estimated that his controversial remarks and engagement in far-right politics cost Tesla roughly 1 million car sales between his 2022 Twitter acquisition and April of this year. Consumer loyalty to Tesla, though still high, has declined in recent data – a slide that coincided with his “department of government efficiency” dismantling federal agencies in 2025, including aid cuts that researchers projected could contribute to around 14 million deaths globally.

Musk’s personal approval ratings also hit an all-time low this year, according to several surveys. In an August Gallup poll, he ranked five points behind Israeli prime minister Benjamin Netanyahu, who faces war crimes charges, and a separate Quinnipiac poll found that only about 20% of American women view him favorably.

Despite social and economic repercussions from his political stance, Musk’s public backing of far-right ideologies continues unabated. In his typical defiant manner, he has become increasingly vocal about his affiliations, indicating that labels such as “racist” or “extremist” no longer concern him. Tesla shareholders have also shown support; on Thursday, they overwhelmingly approved his proposed $1 trillion compensation, marking the largest in the company’s history.

Shielded from substantial financial fallout and navigating a self-created online echo chamber, Musk has aligned himself with the global far-right, despite a diminishing role within the Trump administration.

Supporting the International Far Right

Musk has maintained cordial relations with several of the globe’s prominent far-right figures. This year, he appeared at a rally for Germany’s anti-immigration party, Alternative for Germany. After making what was widely described as a fascist salute at an event following Trump’s inauguration, Musk gave a speech suggesting that Germany should move beyond its historical sins, attracting criticism from Jewish leaders.

Recently, Musk has engaged with the nativist political movement, connecting with far-right activists and influencers on X. Robinson, a long-standing anti-Islam figure with a history of legal troubles, has notably gained his attention.

“It’s time for the British to unite with strong figures like Tommy Robinson and fight for our survival, or we will all perish,” Musk proclaimed on X in response to a video showing a stabbing. “If we fail to alter the trajectory of illegal immigration, similar violence will occur throughout England,” he added.

In September, Musk participated in Robinson’s London rally via livestream, advocating for the disbanding of the British government and claiming that immigration was leading to “Britain’s destruction.”

“Violence will confront you regardless of your choice. Resist or face annihilation,” Musk told the attendees. His comments were later condemned by Downing Street, which cautioned that they risked inciting violence.

Overall, Musk’s acquisition of Twitter in 2022 benefited the UK far-right, enabling those previously expelled for violating hate speech policies to return. Musk reinstated accounts from notorious neo-Nazi groups, as well as Robinson’s account, which had been banned for advocating the mass deportation of Muslim refugees.

Musk has emerged as a leading advocate for Britain’s fringe political group, Restore Britain, praising its leader Rupert Lowe. This group recently published a 113-page policy outlining plans for the large-scale deportation of illegal immigrants, proposing the use of military aircraft to transport thousands to Rwanda, irrespective of their origin.

Musk’s growing interference in British politics has attracted criticism from not only the country’s Labour government but also from anti-hate organizations and other politicians.


Liberal Democrat leader Ed Davey stated on Thursday: “Elon Musk is intentionally using his platform to contaminate our politics and divide our nation. It’s high time our government acknowledges the threat he poses.”

Creating a Right-Wing Bubble Online

While active in global politics, Musk is also committed to building online avenues that promote his conservative ideology. His concerns about artificial intelligence becoming overly “politically correct” and its outputs being “woke” have led him to engage in numerous speculative discussions.

“If universal diversity mandates exist, straight white men would not survive. Consequently, you and I could be terminated by AI,” Musk remarked on Joe Rogan’s podcast last week, invoking the infamous “paperclip maximizer” thought experiment, which posits that an AI whose sole purpose was creating paperclips could annihilate humanity in the process.

Musk’s answer to the AI alignment problem has been his artificial intelligence venture, xAI, which has occupied much of his focus since he exited government, with aims to develop products that align more closely with his views. In theory, he envisions right-wing versions of popular platforms and products; in practice, these initiatives have often floundered. His efforts to build a more conservative AI resulted in recent mishaps, including xAI’s Grok chatbot propagating conspiracy theories about “white genocide” and identifying itself as “MechaHitler.”

In addition to communicating with his online allies, Musk has utilized social media to target nonprofits and political adversaries.

Last month, Musk was pivotal in a campaign against the Anti-Defamation League, the foremost Jewish advocacy organization in the U.S. The group had come under scrutiny from the right over a glossary entry tying extremist groups to slain conservative activist Charlie Kirk’s organization, Turning Point USA. Musk alleged that the ADL “hates Christians” and promotes violence, and amplified posts from right-wing figures criticizing the organization. The campaign led the ADL to retract its entire extremism glossary, which had been recognized as the most exhaustive resource on extremist organizations and movements.

Ahead of the New York mayoral race, Musk again leveraged his platform, using paid promotions to amplify posts denouncing Democratic candidate Zohran Mamdani and inundating users with his tweets. On election day, he shared a series of tweets that misrepresented the electoral process and hinted at a voting conspiracy.

As part of Musk’s efforts to establish an alternative informational ecosystem, xAI recently launched a Wikipedia alternative titled Grokipedia, which Musk proclaimed as superior and impartial. Researchers found that it contained substantial misinformation about notable individuals and events, replicated some entries directly from Wikipedia, and emphasized right-wing perspectives on slavery, immigration, and transgender rights.

For instance, Wikipedia describes Britain First as a “neo-fascist party,” while Grokipedia refers to it as a “patriotic party.”

Source: www.theguardian.com

Analyzing Post-Riot Behavior: Tracking Far-Right Radicalization Through 51,000 Facebook Messages

Over 1,100 individuals have faced charges related to the summer 2024 riots, with a small fraction being prosecuted for crimes associated with their online conduct.

Sentences ranged from 12 weeks to seven years, igniting a surge of online backlash. The individuals behind the posts were varied; one defendant became a cause célèbre and was labelled a “political prisoner.” Their posts were minimized and mischaracterized, and their prosecution was framed as an infringement on free speech, despite the majority of online-related charges involving allegations of inciting racial hatred.

The posts did not predominantly surface in niche online spaces commonly linked to fringe ideologies – Telegram, Parler, Gettr, 4chan and 8kun – but on mainstream platforms such as X, Instagram and Facebook. While many of the posts were on personal profiles, some appeared in public group forums.

This raised questions: What online communities did these individuals engage with, and who were their advocates? What type of content was circulating in these environments? It seemed that within these circles, views were so normalized that individuals felt emboldened to share content that was considered criminal by British authorities and the judiciary.

As a starting point, we used publicly accessible resources (police records and news reports) to track the Facebook accounts of those implicated. Of roughly 20 individuals charged with online offenses related to the summer 2024 riots, we traced five to three public Facebook groups. We also found visually similar or duplicated posts defending those individuals circulating in these groups.

This led to the mapping of a broader network of other Facebook groups, connected through shared memberships and group moderators and administrators.

In this exploration, we uncovered vibrant ecosystems characterized by a profound distrust of government and its institutions, alongside online communities preoccupied with anti-immigrant sentiment, nativism, conspiracy theories, and misinformation.

Additionally, we found individuals who expressed genuine concerns about the society they belong to, alongside those who are deeply disillusioned and believe their freedom of expression is at risk.

Identification of Groups

Why focus on these groups?

Three groups were selected for the primary analysis because they included one or more current or former members charged in connection with the summer 2024 riots, whether for their actions in person or for their comments online.

We established links between these and 13 additional groups, with all but three being public. These groups play significant roles, as moderators can oversee memberships, approve requests, and issue bans, with the authority to delete posts and comments. Administrators have even broader permissions, including the ability to modify group settings, update descriptions, and appoint additional moderators or administrators.

Which posts were analyzed?

To understand the type of content shared within these groups, we aimed to capture every post made in the three largest groups from their inception until mid-May 2025.

We collected links and text from a total of 123,000 posts. However, due to the classification process (outlined below), the analysis was ultimately focused on 51,000 text-based posts.

What was the group membership size?

We did not record the names of individual group members (aside from moderators, administrators, and prominent posters). Therefore, when discussing combined memberships across groups, it is likely that individuals who belong to multiple groups were counted more than once.

Classification

First, we identified far-right content using established academic methods, flagging posts through specific keywords associated with radicalization. We then supplemented this with an AI tool – available to our data team following recent changes to editorial policy on journalistic uses of AI – to classify content as anti-establishment, anti-immigrant, migrant demonization, nativism, or far-right identity/denial.

To categorize the 51,000 social media posts, we employed OpenAI’s GPT-4.1 via its API. The prompts were refined over 12 iterations, each tested on a random sample of posts, until agreement between the model and three human reviewers exceeded 90%.
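The actual prompts and parsing code were not published. Purely as an illustrative sketch, a prompt-and-parse step for this kind of multi-label classification might look like the following; the category list echoes those named above, but the prompt wording and helper functions are hypothetical:

```python
# Hypothetical sketch of a multi-label classification prompt and parser.
# The category names mirror those mentioned in the methodology; the
# prompt wording and helpers are illustrative, not the Guardian's own.

CATEGORIES = [
    "anti-establishment",
    "anti-immigrant",
    "migrant demonization",
    "nativism",
    "none",
]

def build_prompt(post_text: str) -> str:
    """Build a single classification prompt for one post."""
    labels = ", ".join(CATEGORIES)
    return (
        "Classify the social media post below. Reply with a comma-separated "
        f"list drawn only from these labels: {labels}.\n\n"
        f"Post: {post_text}"
    )

def parse_labels(model_reply: str) -> list[str]:
    """Parse the model's comma-separated reply, keeping only known labels."""
    candidates = [part.strip().lower() for part in model_reply.split(",")]
    return [c for c in candidates if c in CATEGORIES]
```

A real pipeline would send `build_prompt(...)` to the model’s API, feed the reply to `parse_labels`, and set aside any replies that fail to parse for human review.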

Having established the model’s reliability on small batches, we evaluated it more broadly on a statistically determined sample of posts, achieving 93% agreement between human reviewers and the AI model.

The final evaluation drew on a statistically validated number of posts, each reviewed by the same annotator.

Testing concluded that the model performed exceptionally well, matching or even exceeding human reviewer consistency across most categories.

  • Accuracy (percentage of instances classified correctly): 94.7%.

  • Precision (of the instances GPT labelled as true, the percentage that human reviewers also labelled true): 79.5%.

  • Recall (of the instances human reviewers labelled true, the percentage that GPT also labelled true): 86.1%.

  • F1 score (the harmonic mean of precision and recall, with higher values indicating better classification): 82.6%.
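These four figures reduce to a simple tally of model-versus-human labels. As a minimal sketch for a single yes/no category (the sample labels below are invented, not the article’s data):

```python
# Minimal sketch of the four evaluation metrics for a binary label
# ("post belongs to this category" yes/no). The example labels are
# invented for illustration.

def evaluate(human: list[bool], model: list[bool]) -> dict[str, float]:
    tp = sum(h and m for h, m in zip(human, model))        # both said yes
    fp = sum((not h) and m for h, m in zip(human, model))  # model yes, human no
    fn = sum(h and (not m) for h, m in zip(human, model))  # human yes, model no
    correct = sum(h == m for h, m in zip(human, model))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": correct / len(human),
        "precision": precision,
        "recall": recall,
        # F1 is the harmonic mean of precision and recall
        "f1": 2 * precision * recall / (precision + recall),
    }

human = [True, True, True, False, False, True, False, False]
model = [True, True, False, True, False, True, False, False]
print(evaluate(human, model))
```

On the reported numbers, the F1 of 82.6% is consistent with the harmonic mean of the stated precision (79.5%) and recall (86.1%).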

The model’s performance was evaluated by an internal statistical analyst, who concluded that its results were strong when benchmarked against similar academic studies.

Despite the model’s impressive performance, misclassifications in the analysis are inevitable.

We believe the classification process employing OpenAI’s API is thorough, transparent, defensible, and bolsters rigorous journalism.

Illustration: Guardian Design / Rich Cousins



Source: www.theguardian.com

Musk’s Grok AI Bot Misidentifies Footage of Police Misconduct at London Far-Right Rally

The Metropolitan police were forced to correct false claims generated by artificial intelligence on Elon Musk’s X platform, after footage of Saturday’s far-right rally in London was wrongly identified as dating from 2020.

The Grok chatbot had been answering users on X who asked about the location and timing of footage showing police clashing with the crowd.

Grok, despite a history of providing inaccurate information, asserted that “the footage appears to show a confrontation between police and protesters over restrictions on September 26, 2020, during an anti-lockdown demonstration at Trafalgar Square in London.”

The response was quickly amplified on X, with Daily Telegraph columnist Allison Pearson tweeting, “This aligns with my suspicions.”

The Met responded to her, clarifying that the footage was captured before 3pm at the junction of Whitehall and Horse Guards Avenue.

“It is clearly not Trafalgar Square, as suggested by the AI response you referenced. To eliminate any confusion, we have included a labelled comparison verifying the location,” the force added.

The exchange illustrates the challenges social media platforms pose for police. It came on a day when 26 officers sustained injuries amid violence at a rally organized by far-right activists affiliated with Tommy Robinson, which Elon Musk addressed remotely.

Musk faced criticism for his remarks, delivered to Robinson’s rally via live link, in which the billionaire told the audience that “violence is coming,” asserting, “You will either fight back or perish.”

Liberal Democrat leader Ed Davey stated: “Elon Musk incited violence on our streets yesterday. I hope that politicians from all parties unite in denouncing his deeply dangerous and irresponsible rhetoric.”

Business secretary Peter Kyle was asked by the BBC on Sunday whether a tech billionaire was attempting to provoke violence on Britain’s streets.

Grok is a creation of Musk’s AI company xAI and is accessible to users on his social media platform, X. Users can pose questions on X by tagging “@grok”, prompting the chatbot to respond.


Previously, Grok mentioned South Africa’s “white genocide” in unrelated discussions.

The idea stems from a far-right conspiracy theory that has gained traction in mainstream discourse, promoted by figures such as Musk and Tucker Carlson.

Musk is a prominent supporter of Robinson and played a significant role in reviving the narrative around gangs that groomed and assaulted girls in the UK over many years. Last year, Downing Street rebuked Musk for posting on X that “civil war is inevitable” alongside footage of violent rioting in Liverpool.

X was contacted for a statement regarding Grok’s misleading information related to Saturday’s footage.



Source: www.theguardian.com

Elon Musk Urges Parliament Dissolution at London Far-Right Rally

Elon Musk advocated the “dissolution of parliament” and a “change of government” in the UK during London’s “Unite the Kingdom” rally, organized by far-right activist Stephen Yaxley-Lennon, commonly known as Tommy Robinson.


Musk, the owner of X, appeared via video link as thousands listened to him rail against the “woke mind virus,” assert that “violence is coming,” and warn the audience, “you’ll fight back or die.”

He remarked: “I firmly believe there needs to be a change of government in the UK. There can’t be another four years.”

“We must take action. Parliament needs to be dissolved, and a new election must be held.”

This isn’t Musk’s first foray into British politics. He previously engaged in a verbal clash with the UK government regarding grooming gangs and criticized the 2023 online safety law, denouncing it as a threat to free speech.

Although he once shared a warm rapport with Nigel Farage, and was rumoured to be planning a major donation to Farage’s party, Musk has since called for new leadership in Britain amid the controversy surrounding his support for Robinson.

Aerial footage shows the scale of the “Unite the Kingdom” rally – video

Musk addressed the crowd in central London:
“This is directed at the rational centre, those who don’t usually engage in politics but simply want to live their lives.

“My message is for them. If this persists, violence will come to you, leaving you with no choice. You are in a critical situation here.

“Whether you decide to resort to violence or not, it is coming. You either fight back or perish; that’s the reality.”




Katie Hopkins and Tommy Robinson at the “Unite the Kingdom” rally in central London on Saturday. Photograph: Lucy North/PA

Musk further asserted, “The left is the murder party,” referencing the death of Charlie Kirk.

He stated: “There is a tremendous amount of violence from the left. My friend Charlie Kirk was murdered in cold blood this week, and those from the left are openly celebrating it. The left embodies a party of murder and revels in killing.”

Additionally, he criticized what he termed the “woke mind virus,” asserting that merit should dictate advancement, not “discrimination based on gender, religion, race, etc.”




Flares are launched as police attempt to control the crowd at the rally. Photograph: Tayfun Salcı/EPA

He remarked: “Many woke movements are inherently super racist, super sexist, and often anti-religious. Why is prejudice against Christians acceptable? It’s unjust… that’s what wokeness is, and I’m calling it out; it’s fundamentally contradictory.”

Attendance was estimated to exceed 110,000 individuals at what is regarded as one of the largest nationalist gatherings in decades. Marchers encountered approximately 5,000 anti-racist counter-protesters.

Along with Musk, figures like Katie Hopkins and French far-right politician Eric Zemmour were also present as speakers at the event.

PA Media contributed to this report

Source: www.theguardian.com

One Progressive Takes on Twenty Far-Right Conservatives: Mehdi Hasan Reflects | YouTube

Mehdi Hasan was acutely aware of his viral status. The broadcaster and author watched the views surge on YouTube, his phone buzzing incessantly. But it truly hit him when, at an event in Washington, someone approached him speaking Urdu to say they had seen his debate with the 20 far-right conservatives.


The man was referring to the British-American commentator’s appearance in “Surrounded,” Jubilee Media’s gladiatorial one-against-many debate series on YouTube. In the episode “1 Progressive vs 20 Far-Right Conservatives,” Hasan was questioned about his “ethnic background” by a man the Guardian later unmasked as the organizer of two violent far-right protests. Laughter erupted during the debate when another participant agreed that he was a fascist.

“I saw the vast, engaged young audience and thought it was a good platform,” says Hasan, who launched his own alternative news outlet, Zeteo, last year. “But it was really intense, something I hadn’t anticipated. It was extraordinary, for both positive and negative reasons.”


Hasan’s nearly two-hour discussion, which has been edited and repackaged for continuous redistribution, propelled Jubilee Media into the forefront of mainstream awareness, igniting dialogues about the political and social ramifications of new media formats, alongside various existential uncertainties.

The fast-growing entertainment company, launched in 2017, captured young audiences by transforming contentious Trump-era debates into highly engaging content. Besides debates, it has also developed games and dating shows. Its standout format pits one person against a rotating succession of opponents, who take turns in a single challenger’s chair to debate pressing political issues.

Its titles draw audiences few traditional broadcasters can match: “Flat Earth vs Scientists: Can You Trust Science?” (31m views) and “Can 25 Liberal College Students Outsmart One Conservative? (ft. Charlie Kirk)” (30m views). A 2024 video with Ben Shapiro, in which a trans man confronted the right-wing commentator, was the fifth most viewed election-related content on YouTube.

Founder and CEO Jason Y Lee established Jubilee in 2010 as a nonprofit after a video of him busking for charity went viral. He told Variety that the organization “aims to illustrate what discourse looks like and should look like,” envisioning it as a potential “Disney for empathy.” But how does the combative nature of these performances align with the goal of “encouraging understanding and building human connections”?

Spencer Kornhaber, who writes about popular culture for the Atlantic, sees the idealism as genuine but fueled by ambition. “In Jubilee’s context, empathy is a defense of voyeurism and a curiosity about others,” Kornhaber reflects. “Lee didn’t aspire to be the new UN. He aims to be Disney – a prominent for-profit entertainment entity known for its capacity to commercialize anything and spawn franchises.”

Julia Alexander, a media correspondent for Puck News, noted that Jubilee benefited from the rise of free-speech absolutism and the internet’s shift toward social and video platforms. Yet she asserts that while it may have initially aimed to counter the negativity of online discussion, the platform has succumbed to “the hateful vitriol that defines much of social media.” It has little hope, she argues, against the well-understood currency of the internet: contentious and alarming content generates more engagement than constructive dialogue.

“I hope they choose to concentrate on generating positive internet content. We surely need it,” she remarks. “Yet, I worry as they are compelled to scale continuously and surpass previous performances, leading to a tendency to produce even more extreme content.”


Hasan, also a contributor to the Guardian, recognizes the allure of Jubilee’s more extreme videos. He authored “Win Every Argument,” a book on the art of debate, and argues that traditional media has vacated the battlefield, allowing platforms like YouTube to fill the void.

“Mainstream media did a poor job of facilitating discussion and debate, and of giving a voice to those with unorthodox perspectives,” he comments. “But I believe there is a balance between the extremes of censoring opinion and having no standards at all. On YouTube there are no guardrails; as long as you’re getting clicks, you can post whatever you want.”

He acknowledges some of the criticism of his participation, even agreeing with aspects of it. Author and disability rights advocate Imani Barbarin pointed out that clips of Hasan taking down far-right militants were shared by progressives celebrating his “victories,” while equally large numbers were shared as proof of his failures. “We live in a memetic culture of politics,” Barbarin wrote in a post on X. “These moments are literally extracted from space and time. […] The surrounding context of that moment becomes irrelevant.”

Hasan says that if he has any regrets, it is that he did not know more about the individuals he faced, and failed to recognize the extremists among them. As for regretting his participation altogether, he pauses to consider.

“I stand by what I said. I believe I performed adequately in the debate,” he reflects. “The broader question remains: is the format itself problematic? Are these arguments worth making? And I’m uncertain of the answer. Ask me again in five years.”

Source: www.theguardian.com

Warning: Far-Right Extremists Using Gaming Platforms to Radically Influence Teens

Far-right extremists are leveraging livestream gaming platforms to recruit and radicalize teenagers, a report indicates.

Recent research published in the journal Frontiers in Psychology reveals how various extremist groups are using chats and livestreams around video games to attract and radicalize mainly young men and vulnerable users.

UK counter-terrorism and crime agencies are urging parents to remain vigilant, as online offenders specifically target young people during the summer break.

In an unprecedented step, last week, the counter-terrorism police, MI5, and the National Crime Agency issued a joint alert to parents and guardians that online perpetrators would “exploit school holidays to engage in criminal activities with young people when they know that less support is readily available.”

Dr William Allchorn, a senior researcher at the International Policing and Public Protection Research Institute at Anglia Ruskin University, who conducted the study with his colleague Dr Elisa Orofino, said that “gaming-adjacent” platforms are being used as a “digital playground” for extremist activity.


Allchorn has found that extremists intentionally redirect teenagers from mainstream social media platforms to these gaming sites.

The most prevalent ideology among extremist users was the far right, with users glorifying extreme violence and sharing content related to school shootings.

Felix Winter, who threatened to carry out a mass shooting at his Edinburgh school, was sentenced on Tuesday to six years; the court heard that the 18-year-old had been radicalized online and had spent more than 1,000 hours interacting with a pro-Nazi group.

Allchorn noted a significant increase in coordinated efforts by far-right groups such as Patriotic Alternative to recruit young people through gaming events, a push that arose during lockdown. Since then, however, such activity has shifted into public groups and channels on platforms like Facebook and Discord, as many extremist factions have been pushed off mainstream platforms.

He further explained that younger users might gravitate towards extreme content for its shock value among peers, which could render them susceptible to being targeted.

Extremists have had to adapt their methods, as most platforms have banned them, Allchorn said. “We consulted with local community safety teams, and they emphasized the importance of building trust rather than overtly promoting ideologies.”

The research also drew on discussions with platform moderators, who expressed concerns about inconsistent enforcement policies and the burden of deciding whether to report certain content or users to law enforcement.

While in-game chats are not specifically moderated, moderators reported being overwhelmed by the sheer volume and complexity of harmful content, including the use of coded symbols to bypass automated moderation tools.

Allchorn emphasized the importance of digital literacy for parents and law enforcement so they may better grasp how these platforms and their subcultures function.

Last October, MI5’s head Ken McCallum revealed that “13% of all individuals being investigated by MI5 for terrorism-related activities in the UK are under the age of 18.”

AI tools are employed to assist moderation but often struggle to interpret memes or language that is ambiguous or sarcastic.

Source: www.theguardian.com

“My AI Voice was Cloned and Used by the Far-Right. Can I do anything to stop it?” – Georgina Findlay

My brother put his phone to my ear. “You’re going to think this is creepy,” he warned, playing me an Instagram reel. The footage, which showed teenage boys at a rally, carried a news broadcast-style narration. “The recent protests by British students have become a powerful symbol of the deepening crisis in Britain’s education system,” said a soft female voice with barely a hint of a Mancunian accent. I opened my eyes wide and sat up straight.

As a presenter on a YouTube news channel, I was used to hearing my voice on screen. But this wasn’t me – even if the voice was unmistakably mine.

“They force us to learn about Islam and Muhammad in school,” the voice continued. “Listen, this is disgusting.” It was horrifying to hear my own voice attached to far-right propaganda, but more than that, I was horrified by how convincing the fraud was. As I dug deeper, I learned just how far-reaching the effects of faked voices can be.

AI voice cloning is an emerging form of audio “deepfake” and was the third fastest-growing scam of 2024. Unwitting victims find that their voices have been duplicated without their consent or even their knowledge, a phenomenon that has already led to bank security checks being bypassed and to people being deceived into sending money to strangers they believed were relatives. My brother was sent the clip by a friend who recognised my voice.

After some research, I found a far-right YouTube channel with about 200,000 subscribers. It purported to be American, but many of the misspellings in its videos were typical of misinformation accounts run by non-native English speakers. I was shocked to find my voice featured in eight of the channel’s 12 most recent videos; one video using my voice, posted five months earlier, had 10m views. The voice was almost identical to mine, except that the pacing of my speech was slightly odd: a giveaway that it was AI-generated.


The increasing sophistication of AI voice-cloning software is a cause for serious concern. In November 2023, an audio deepfake of London mayor Sadiq Khan supposedly making inflammatory remarks about Armistice Day circulated widely on social media. The clip very nearly caused serious unrest, Khan told the BBC. “If you’re looking to sow disharmony and cause trouble, there’s no better time.” At a time when confidence in Britain’s political system is already at a record low, with 58% of Britons saying they have “little trust” in politicians to tell the truth, the ability to manipulate public discourse is more damaging than ever.

The legal right to own one’s voice falls into a vague grey area of poorly legislated AI issues. In November, the TV naturalist David Attenborough found himself at the centre of an AI voice-cloning scandal; he said he was “deeply disturbed” to learn that his voice was being used to deliver partisan news bulletins in the United States. In May, the actor Scarlett Johansson clashed with OpenAI after the company released a text-to-speech voice for ChatGPT that she described as “eerily similar” to her own.

In March 2024, OpenAI postponed the release of a new voice-replication tool, judging it “too risky” to make publicly available in a year with a record number of global elections. Some AI startups that let users clone their own voices have since introduced preventive policies, enabling them to detect the creation of clones that imitate politicians actively involved in election campaigns, including in the US and UK.

However, these mitigation measures are not enough. In the United States, concerned senators have proposed legislation to crack down on those who clone voices without consent. In Europe, the European Identity Theft Surveillance System (Aitos) has developed four tools to help police identify deepfakes, and plans to have them ready by the end of this year. But tackling the audio crisis is no easy task. Dr Dominic Rees, an expert on AI in film and television who advises a UK parliamentary committee, told the Guardian: “Our privacy and copyright laws are not prepared for what this new technology will bring.”

If declining trust in institutions is one problem, creeping distrust between people is another. Trust is central to human cooperation, and as globalisation advances and our personal and professional lives become ever more intertwined, it has never been undermined to this extent. Hany Farid, a professor of digital forensics at the University of California, Berkeley, and an expert in deepfake detection, told the Washington Post that the consequences of this voice crisis could be as extreme as mass violence or “election theft”.

Is there any upside to this new ability to clone voices so easily? Perhaps. AI voice clones could allow people to seek solace by connecting with dead loved ones, or help give a voice to those with medical conditions. The American actor Val Kilmer, who has been treated for throat cancer, returned for Top Gun: Maverick in 2022 with a voice restored by AI. Our capacity for innovation may serve those with evil intentions, but it also serves those working for good.

When I became a presenter, I happily shared my voice on screen, but I never agreed to sign away this essential and precious part of myself to anyone who wants to use it. As broadcasters, we sometimes worry about how colds and winter viruses will affect our recordings. But my recent experience has given the idea of losing one’s voice a different, far more sinister meaning.

Source: www.theguardian.com

How social media fueled far-right riots in the UK: The role of the polarisation engine

The 1996 Dunblane massacre and the protests that followed were a textbook example of how an act of terrible violence can mobilise a nation to demand effective gun control.

The atrocity, in which 16 children and a teacher were killed, triggered a wave of nationwide backlash, and within weeks 750,000 people had signed a petition calling for legal reform. Within a year and a half, new laws were in place making it illegal to own handguns.

Nearly three decades on, the horrific violence at a Southport dance studio has provoked a starkly different response. It shocked many in the UK this week, but to experts on domestic extremism, particularly those who study the intersection of violence and technology, it is all too familiar and, in this new age of algorithmic rage, sadly inevitable.

“Radicalization has always happened, but before, leaders were the bridge-builders that brought people together,” said Maria Ressa, a Filipino journalist and sharp-tongued technology critic who won the 2021 Nobel Peace Prize. “That’s no longer possible, because what once radicalized extremists and terrorists now radicalizes the general public, because that’s how the information ecosystem is designed.”

For Ressa, all of the violence that erupted on the streets of Southport, and then in towns across the country, fuelled by wild rumours and anti-immigrant rhetoric on social media, felt all too familiar. “Propaganda has always been there, violence has always been there, it’s social media that has made violence mainstream. [The US Capitol attack on] January 6th is a perfect example. Without social media to bring people together, isolate them, and incite them even more, people would never have been able to find each other.”

The biggest difference between the Dunblane massacre in 1996 and today is that the way we communicate has fundamentally changed. In our instant information environment, informed by algorithms that spread the most shocking, outrageous or emotional comments, social media is designed to do the exact opposite of bringing unity: it has become an engine of polarization.

“It seemed like it was just a matter of time before something like this happened in the UK,” says Julia Ebner, who leads the Violent Extremism Lab at Oxford University’s Centre for the Study of Social Cohesion. “This alternative information ecosystem is fuelling these narratives. We saw that in the Chemnitz riots in Germany in 2018, which this reminded me of strongly, and in the January 6 riots in the United States.”

“You see this chain reaction with these alternative news channels. Misinformation can spread very quickly and mobilize people into the streets. And then, of course, people tend to turn to violence because it amplifies anger and deep emotions. And then it travels from these alternative media to X and mainstream social media platforms.”

This “alternative information ecosystem” includes platforms such as Telegram, BitChute, Parler and Gab, and often operates unseen behind the scenes of mainstream media and social media. It has proved a breeding ground for the far right, conspiracy theories and extremist ideologies, which collided this week and mobilised people on to the streets.

“Politicians need to stop talking about ‘the online world’ as though it were separate from ‘the real world’,” Ressa said. “How many times do I have to say it? They are the same thing.”

A burnt-out car is removed after a night of violent anti-immigration protests in Sunderland. Photograph: Holly Adams/Reuters

For Jacob Davey, director of counter-hate policy and research at the Institute for Strategic Dialogue in London, it was a “catastrophe”: recent mass protests in the UK have emboldened the far right, with far-right figures such as Tommy Robinson being “replatformed” on X while measures to curb hate are rolled back.

The problem is that even though academics, researchers and policymakers are increasingly understanding the issue, very little is being done to solve it.

“And every year that goes by without this issue being addressed and without real legislation on social media, it’s going to get significantly worse,” Ressa said. “[Soviet leader] Yuri Andropov said dezinformatsiya [disinformation] is like cocaine. Once or twice it’s OK, but if you take it all the time it becomes addictive. It changes you as a person.”

However, while UK authorities are aware of these threats in theory (in 2021, MI5 director general Ken McCallum said far-right extremism was the biggest domestic terrorism threat facing the UK), the underlying technological problems remain unresolved.


It is seven years since the FBI and US Congress launched investigations into the weaponisation of social media by the Russian government. Much of the UK’s right-wing media has ignored or mocked those investigations, yet this week the Daily Mail ran a shocked headline about one suspicious account on X. The account may be based in Russia and may be spreading false information, but it is likely only part of the picture.

And there is still little recognition that what we are witnessing is part of a global phenomenon — a rise in populism and authoritarianism underpinned by deeper structural changes in communication — or, according to Ebner, the extent to which the parallels with what is happening in other countries run deep.

“The rise of far-right politics looks very similar across the world and across different countries. No other movement has been able to amplify its ideology in the same way. The far right is tapping into really powerful emotions, the emotions algorithms reward: anger, indignation, fear, surprise.”

“And really what we’re seeing is a sense of collective learning within far-right communities in many different countries. And a lot of it has to do with building these alternative information ecosystems and using them to be able to react or respond to something immediately.”

The question is, what will Keir Starmer do? Ebner points out that this is no longer a problem in dark corners of the internet. Politicians are also part of the radicalised population. “They are now saying things they would not have said before, they are blowing dog whistles to the far right, they are playing with conspiracy theories that were once promoted by far-right extremists.”

And human rights groups such as Big Brother Watch fear that some of Starmer’s solutions – including a pledge to increase facial recognition systems – could lead to further harm from the technology.

Ravi Naik, of AWO, a law firm specialising in cases against technology companies, said there were a number of steps that could be taken, including the Information Commissioner’s Office enforcing data restrictions and police action against incitement to violence.

“But these actions are reactive,” Naik said. “The problem is too big to be addressed at the whim of a new prime minister. It is a deep-rooted issue of power, and it cannot be solved in the middle of a crisis or by impulsive reactions. We need a real adult conversation about digital technology and the future we all want.”

Source: www.theguardian.com

Far-right violence in the UK fueled by TikTok bots and AI

Less than three hours after the stabbing that left three children dead on Monday, an AI-generated image was shared on X by the account “Europe Invasion”. The image shows bearded men in traditional Islamic dress standing outside the Houses of Parliament, one of them brandishing a knife, with a crying child in a Union Jack T-shirt behind them.

The tweet has since been viewed 900,000 times and was shared by one of the accounts most prolific in spreading misinformation about the Southport stabbing, with the caption “We must protect our children!”.

AI technology has been used for other purposes too – for example, an anti-immigration Facebook group generated images of large crowds gathering at the Cenotaph in Middlesbrough to encourage people to attend a rally there.

Platforms such as Suno, which employs AI to generate music including vocals and instruments, have been used to create online songs combining references to Southport with xenophobic content, including one titled “Southport Saga”, with an AI female voice singing lyrics such as “we'll hunt them down somehow”.


Experts warn that with new tactics and new ways of organizing, Britain's fragmented far-right is seeking to unite in the wake of the Southport attack and reassert its presence on the streets.

The violence across the country has led to a surge in activism not seen in years, with more than 10 protests being promoted on social media platforms including X, TikTok and Facebook.

This week, a far-right group’s Telegram channel has also carried death threats against the British prime minister, incitement to attacks on government facilities, and extreme antisemitic comments.

Amid fears of widespread violence, a leading counter-extremism think tank has warned that the far-right risks mobilizing on a scale not seen since the English Defence League (EDL) took to the streets in the 2010s.

The emergence of easily accessible AI tools, which extremists have used to create a range of material from inflammatory images to songs and music, adds a new dimension.

Andrew Rogojski, director of the University of Surrey's Human-Centred AI Institute, said advances in AI, such as image-generation tools now widely available online, mean “anyone can make anything”.

He added: “The ability for anyone to create powerful images using generative AI is of great concern, and the onus then shifts to providers of such AI models to enforce the guardrails built into their models to make it harder to create such images.”

Joe Mulhall, research director at campaign group Hope Not Hate, said the use of AI-generated material was still in its early stages, but it reflected growing overlap and collaboration between different individuals and groups online.

While far-right organizations such as Britain First and Patriotic Alternative remain at the forefront of mobilization and agitation, the presence of a range of individuals not affiliated to any particular group is equally important.

“These are made up of thousands of individuals who, outside of traditional organizational structures, donate small amounts of time and sometimes money to work together toward a common political goal,” Mulhall said. “These movements do not have formal leaders, but rather figureheads who are often drawn from among far-right social media 'influencers.'”

Joe Ondrak, a senior analyst at the British disinformation monitoring company Logically, said the hashtag #enoughisenough has been used by some right-wing influencers to promote the protests.

“What's important to note is how this phrase and hashtag has been used in previous anti-immigration protests,” he said.

The use of bots was also highlighted by analysts, with Tech Against Terrorism, an initiative launched by a branch of the United Nations, citing a TikTok account that first began posting content after Monday's Southport attack.

“All of the posts were Southport-related and most called for protests near the site of the attack on July 30th. Despite having no previous content, the Southport-related posts garnered a cumulative total of over 57,000 views on TikTok alone within a few hours,” the spokesperson said. “This suggests that a bot network was actively promoting this content.”

At the heart of the network of individuals and groups surrounding far-right activist Tommy Robinson, who fled the country ahead of a court hearing earlier this week, are Laurence Fox, the actor turned right-wing activist who has been spreading misinformation in recent days, and conspiracy websites such as Unity News Network (UNN).

On a channel run by UNN on Telegram, a largely unmoderated messaging platform, some commenters rejoiced at the violence outside Downing Street on Wednesday. “I hope they burn it down,” one said. Another called for the hanging of prime minister Keir Starmer, saying “Starmer needs the Mussalini [sic] treatment”.

Among those on the scene during the Southport riots were activists from Patriotic Alternative, one of the fastest-growing far-right groups of recent years. Other groups, including those split over positions on conflicts such as the war in Ukraine and the Israel-Gaza conflict, are also seeking to get involved.

Dr Tim Squirrell, director of communications at the counter-extremism think tank the Institute for Strategic Dialogue, said the far-right had been seeking ways to rally in the streets over the past year, including on Armistice Day and at screenings of Robinson's film.

“This is an extremely dangerous situation, exacerbated by one of the worst online information environments in recent memory,” he said.

“Robinson remains one of the UK far-right's most effective organizers, but we are also seeing a rise in accounts large and small that have no qualms about aggregating news articles and spreading unverified information that appeals to anti-immigrant and anti-Muslim sentiment.”

“There is a risk that this moment will be used to spark street protests similar to those in the 2010s.”

Source: www.theguardian.com

The Gaming Industry Must Take a Stand Against Far-Right Trolls, 10 Years After Gamergate

Ten years ago, a game developer’s embittered ex-boyfriend published a vindictive blogpost accusing her of trading sex for positive reviews of her indie game. The post found its way to 4chan, the most disgusting corner of the internet in 2014, and a harassment campaign began, targeting women working in video game development and the gaming press, as well as LGBTQ+ people in the industry. Sensing blood, YouTube’s “alt-right” provocateurs and Steve Bannon’s Breitbart jumped on the bandwagon and quickly took control. Once this fabricated outrage took hold, Gamergate mutated into one of the first front lines of the modern culture war, fuelled by social media, misogyny and weaponised youth grievance. Many of those tactics later became part of Donald Trump’s campaign strategy.

This week, a 16-person narrative design studio found itself at the centre of a conspiracy theory that holds it responsible for an insidious epidemic of “wokeness” in modern video games. A group with more than 200,000 followers on the PC game store Steam, and thousands more on its Discord channel, believes that Sweet Baby Inc is secretly forcing game developers to change the physical appearance, ethnicity and sexuality of their characters to fit a “woke” ideology. They believe Sweet Baby has covertly created and controlled nearly every popular video game of the past five years, shutting out straight white men. As Trump heads back to the campaign trail, this is part of a broader far-right panic about diversity and inclusion, one that has already produced regressive anti-women and anti-“woke” bills in the US and other countries.




Pride support … Marvel’s Spider-Man 2. Photograph: Sony Computer Entertainment

Of course, the studio in question has done nothing of the sort. It is simply a narrative development studio, the video game equivalent of a script doctor, working with game developers to make sure plots make sense and characters aren’t embarrassingly out of touch. The consultancy’s stated mission is to “make games more engaging, more fun, more meaningful, and more inclusive”. It cannot, for example, dictate that a game feature a Black female protagonist; it has no power to dictate anything. Yet its employees have borne the brunt of the online mob’s wrath: they have been doxed, threatened and abused online.

Ten years ago, it was women games journalists and critics who were on the front line; this time it is narrative designers. But the conspiracy theorists’ message is the same: diversity has no place in games, and if you are a woman, gay or a person of colour working in this industry, you should expect the worst.

Nathan Grayson at Aftermath and Alyssa Mercante at Kotaku have investigated the origins and spread of the Sweet Baby conspiracy theory. Its adherents paint a picture of ludicrous ties between the consultancy and BlackRock, and of a funding crisis affecting the gaming industry as a whole. This is not the first time since Gamergate that this kind of harassment has spread. Depressingly, the systematic mistreatment of game developers has become almost commonplace, especially when they do something as bold as putting a Pride flag in Spider-Man’s Manhattan or implementing mod support for Baldur’s Gate 3. Fully 91% of developers surveyed at last year’s Game Developers Conference said player harassment was a problem, with 42% calling it “very serious”.

When Gamergate was happening, the silence of much of the video game industry was deafening. Instead of coming to the defence of those targeted, nearly everyone who wasn’t directly attacked by the Gamergate mob stuck their fingers in their ears and pretended nothing was happening. Media outlets, game developers and publishers alike were motivated by fear of making the situation worse and of alienating what they feared was a significant portion of their audience. As a result, few spoke up in defence of the women targeted until it was too late, if at all. IGN, the most popular gaming website in the world at the time, published a remarkably weak plea for bipartisanship about “recent unpleasant events” that could not even bring itself to name the movement.

The situation did not die down because the gaming industry lacked a decisive voice; inaction did not deter the mob. Those who were harassed, in some cases hounded out of their homes or workplaces, were simply left feeling alone, enraged and often afraid. The main targets then were women developers, journalists and commentators; this time it is narrative consultants.

In the decade since Gamergate, the culture wars instigated on gamer forums have spread into nearly every aspect of our lives. The last decade has taught us that these people aren’t going away. There may always be those who regard the mere presence of women and minorities in video games, Star Wars or the halls of cultural and political power as an insult and a symptom of the “woke virus”.




Alan Wake 2 developer Remedy Entertainment has denied accusations that narrative studio Sweet Baby ensured the main character would be a Black woman. Photograph: Remedy Entertainment

But we have also learned that ignoring them doesn’t help; it only makes things worse. The people who work at Sweet Baby should not be left to suffer by the studios that employ them. Individual developers are getting braver about speaking out on social media: the director of Alan Wake 2 has posted that the conspiracy theory that Sweet Baby forced developers to change a character’s ethnicity is “absolutely not true”, and Mary Kenny, an associate director at Marvel’s Spider-Man developer Insomniac Games, tweeted a strong denial. But the companies themselves need to follow suit. Publishers and developers that have worked with Sweet Baby Inc include Warner Bros Games and PlayStation’s Santa Monica Studio. Where is their support? Will they publicly defend the people who contributed to their multimillion-dollar games against false accusations, or will they let the trolls control the narrative?

No one is forcing diversity into video games. It is happening naturally, as players and developers themselves become more diverse. Gamergate didn’t bully women out of video games ten years ago, and this campaign won’t succeed now. The gaming industry knows that, whatever some aggrieved gamers think, a wider range of content, made with contributions from a wider range of people and featuring a wider range of characters, is good for creativity and good for business. Now it must say so, fully and clearly.

Source: www.theguardian.com