Anti-Defamation League Removes Extremism Research as Musk Amplifies Right-Wing Backlash

The Anti-Defamation League (ADL), one of the nation’s leading Jewish advocacy and anti-hate organizations, removed more than 1,000 pages of extremism research from its website after facing significant backlash from Elon Musk and right-wing influencers on Tuesday night.

The now-deleted “extremist glossary” included more than 1,000 entries offering background information on groups and ideologies associated with racism, anti-Semitism, and other forms of hate. Pages dedicated to neo-Nazi groups, militias, and anti-Semitic conspiracy theories now redirect to a landing page for the ADL’s extremism research.

Musk and various right-wing accounts on X have recently targeted the ADL over the glossary, which included an entry on Turning Point USA, the organization founded by the late far-right activist Charlie Kirk. Musk responded to a post on X criticizing the group for its entries on Christian Identity, mistakenly conflating the militant movement with Christianity as a whole. In fact, the term refers to a racist, anti-Semitic religious movement that targets Jews and other minorities, not mainstream Christianity.

The ADL did not directly address the backlash in its statements about the decision, arguing instead that removing the glossary would allow the organization to “explore new strategies and creative approaches to present data and research more effectively.”

“With over 1,000 entries compiled over the years, the extremist glossary has been a valuable resource for high-level information across a broad array of topics. However, the increase in entries has rendered many outdated,” stated the ADL. “We have observed many entries that have been intentionally misrepresented and misused. Furthermore, experts continue to develop more comprehensive resources and innovative means to convey information on anti-Semitism, extremism, and hatred.”

The decision to remove the glossary comes amid intense criticism of the ADL from its own staff and researchers, particularly over its positions on Israeli policy and its repeated defenses of Musk. The organization lost a donor, and a prominent executive resigned, following statements by CEO Jonathan Greenblatt praising Musk.

The ADL did not respond to inquiries about the more comprehensive resources mentioned in its statement. The glossary launched in 2022 and was marketed as the first database of its kind, designed to help the media, the public, and law enforcement understand extremist groups and their ideologies.

“We consider it the most extensive and user-friendly resource for extremist speech currently accessible to the public,” noted Oren Segal, senior vice president of the ADL’s Center on Extremism, in an earlier statement. “We believe an informed public is crucial for the defense of democracy.”

ADL pages that contained the 2022 press release now display a message stating, “You are not permitted to access this page.”

Musk has long targeted the ADL, previously threatening to sue the organization over its research documenting the rise of anti-Semitic content on social media platforms. The ADL and Greenblatt nevertheless defended him earlier this year, even after other Jewish groups and lawmakers condemned Musk for a fascist-style salute following Donald Trump’s inauguration. The ADL referred to it as “an unfortunate gesture amid moments of enthusiasm.”


Musk has posted repeatedly about the ADL’s glossary entries, including those on Kirk’s TPUSA, labeling the ADL a “hate group” and insinuating that it incites murder. The TPUSA entry did not label the organization extremist, but it listed leaders and activists who have ties to extremists or who have made “racist or biased statements.”

On Wednesday, Musk continued to focus on the ADL, reiterating his characterization of it as a “hate group.” He also joined another right-wing pressure campaign, calling for a boycott of Netflix over a show featuring trans characters.

Source: www.theguardian.com

Experts Warn ‘MechaHitler’ Chatbot Content Could Be Classed as Violent Extremism in X v eSafety Case

An Australian tribunal heard discussions last week about whether anti-Semitic remarks, such as those produced by the chatbot that dubbed itself “MechaHitler,” could be classified as terrorist and violent extremist content, bringing the chatbots that produce such comments under scrutiny.

Expert witnesses for X, however, contend that large language models lack intent, placing accountability solely on users.

Musk’s AI firm, xAI, issued an apology last week for statements made by the Grok chatbot over a span of 16 hours, attributing the issue to “deprecated code” that left the chatbot susceptible to the influence of existing posts from X users.

The uproar formed the backdrop to an administrative review hearing on Tuesday, at which X contested a notice issued last March by eSafety commissioner Julie Inman Grant demanding clarity on the company’s actions against terrorist and violent extremism (TVE) content.


Chris Berg, an expert witness for X and a professor of economics at RMIT, testified that it is a misconception to believe a large language model can itself produce this type of content, because intent plays a critical role in defining what constitutes terrorism and violent extremism.

Nicolas Suzor, a law professor at Queensland University of Technology and one of eSafety’s expert witnesses, disagreed with Berg, asserting that chatbots and AI generators can indeed contribute to the creation of synthetic TVE content.

“This week alone, X’s Grok generated content that aligns with the definition of TVE,” Suzor stated.

He emphasized that human influence persists in AI development and can carry intent, shaping how Grok responds to inquiries.

The court heard that X believes its Community Notes feature, which lets users contribute fact-checks, along with Grok’s analysis capabilities, helps identify and address TVE material.


Josh Roose, a witness and politics professor at Deakin University, expressed skepticism about the utility of Community Notes in this context, stating that X has urged users to flag TVE content. This has resulted in a “black box” around the company’s investigations, in which typically only a small fraction of material is removed and a limited number of accounts are suspended.

Suzor remarked that it is hard to view Grok as genuinely “seeking the truth” following recent incidents.

“It’s undisputed that Grok is not effectively pursuing truth. I am deeply skeptical of Grok, particularly in light of last week’s events,” he stated.

Berg countered that X’s Grok analytics feature had not been sufficiently updated in response to the chatbot’s output last week, suggesting that chatbots have “strayed” by disseminating hateful content that is “quite strange.”

Suzor argued that instead of optimizing for truth, Grok had been “modified to align responses more closely with Musk’s ideological perspectives.”

Earlier in the hearing, X’s legal representatives accused eSafety of attempting to turn the proceedings into a royal commission into certain aspects of X. Cross-examination raised questions about meetings that took place before any action involving X employees.

Stephen Lloyd, counsel for the government, said X was seeking to portray eSafety as overly antagonistic in their interactions, when the “aggressive stance” in fact came from X’s leadership.

The hearing is ongoing.

Source: www.theguardian.com