The Rise of Hate: Exploring Racism, Misogyny and Deception on X – A Question of Ethics

I considered leaving Twitter shortly after Elon Musk bought it in 2022, because I didn't want to be part of a community that could be bought at all, much less by a guy like him. Soon afterwards, the nasty, protracted bullying of staff began. But I've had some of the most interesting conversations of my life on Twitter – randomly, hanging out, or being invited in to talk. “Has anyone else been devastatingly lonely during the pandemic?” “Has anyone ever had a relationship with a boyfriend or girlfriend from middle school?” We called Twitter the place where you tell the truth to strangers (Facebook being the place where you lie to friends), and the openness of it was mutual and wonderful.

After the blue-check fiasco, things got even more unpleasant: identity verification became something you could buy, which paradoxically made you less trustworthy. So I joined a rival platform, Mastodon, but quickly realised I would never build the 70,000 followers I had on Twitter. Not that I was in it for the attention. But my peers there were less diverse and less vocal, and the infrequently updated feeds gave me the eerie, slightly depressing feeling of walking into a mall only to find that half the stores are closed and the rest are all selling the same thing.

In 2023, the network now known as X began sharing advertising revenue with “premium” users. I joined Threads (owned by Meta), where all I seemed to see were strangers confessing to petty misdemeanours. But I stayed on X, where everything is darker. People get paid for engagement, indirectly, through ads. The arrangement is also vague: it is described as “revenue sharing”, but X doesn't tell you which ad revenues were shared with you, so you can't measure revenue per impression. Is X splitting it 50/50? Or 10/90? Is it actually paying you to generate hate?

Elon Musk: “steeped in far-right politics”. Photograph: Getty Images

“What we've seen is that controversial content drives engagement,” says Ed Saperia, dean of the London College of Political Technologists. “Extreme content drives engagement.” It has become possible to make a living creating harmful content. My 16-year-old son noticed this long before I did, on football X: people will say obviously wrong things – comparing David Cameron to Catherine the Great, say – for the hate-clicks. But that's nothing compared with the engagement you get by attacking, say, transgender people. High-engagement tweets are surfaced straight to the top of the “for you” feed by a “black-box algorithm designed to keep you scrolling”, says Rose Wang, COO of rival platform Bluesky – a constant stream of repetitive topics designed to enrage users.

As a result of these changes, “the platform has become inundated with individuals who were previously banned from it, ranging from extremely niche accounts to people like Tommy Robinson and Andrew Tate,” says Joe Mulhall, head of research at Hope Not Hate. We saw the impact of this in August, when misinformation about the identity, ethnicity and religion of the killer of three girls in Southport sparked overtly racist unrest across the UK, the likes of which had not been seen since the 70s. “Not only was X responsible for creating an atmosphere for rioting, it was also a central hub for the organisation and distribution of content that led to rioting,” says Mulhall.

Wayne O'Rourke, a “keyboard warrior” convicted of inciting racial hatred on social media after the August race riots, reportedly made £1,400 a month from his activity on X. The attention-seeking Laurence Fox was last month reported to earn a similar amount from posting on X. O'Rourke had 90,000 followers; Tommy Robinson has more than a million, and presumably makes a lot more money.

Meanwhile, governments have no surefire remedy, even when, as Mulhall puts it, “decisions made on the US west coast clearly impact our communities.” In April, Brazil's supreme court sought the suspension of fewer than 100 X accounts for hate speech and fake news, mainly belonging to supporters of former president Jair Bolsonaro who were challenging the legitimacy of his defeat by Luiz Inácio Lula da Silva. X refused, and also declined to defend itself in court. On Monday, the supreme court unanimously upheld a platform-wide ban, saying X “considers itself above the rule of law”. From a business perspective, it is surprising that Musk didn't try harder to avoid the ban, but there may be things he values more than money – such as exemption from governmental and democratic constraints.

Tommy Robinson, whose ban from X Musk rescinded. Photograph: James Manning/PA

So is it moral to remain on a platform that has done so much to bring the politics of division and hate from our keyboards into real life? Is X worse than Facebook, or TikTok, or even YouTube? And is it intentionally bad? In other words, are we watching Musk's master plan unfold?

“This is not the first time extremist content has circulated online,” Saperia says. “There are a lot of bad platforms, and a lot of bad things happening on them.” X's problem may not be bad regulation, he points out, but bad enforcement – and that is not unique to X. “Have you seen the UK court system these days? Cases from five years ago are only now being tried. Without law, society is impossible.”

While X may be a catalyst for inciting and rallying civil unrest, from the 6 January storming of the US Capitol to Southport and beyond, Saperia says it is important to remember that “politics is shifting rightward not just because of the media environment, but also for complex economic reasons: the western middle class is getting poorer.” Donald Trump may have shocked the traditional US media by speaking directly to voters with his crude and increasingly unhinged messages, but it is naive to imagine that a contented public, secure in a prosperous future, would have embraced his authoritarian overtures. Whether or not social media is funding it, the anger is there, and “all the mainstream platforms have generally failed on hate speech,” Mulhall says. “They didn't want this content, but they were struggling to deal with it. And after Charlottesville [the white supremacist rally in 2017] and the storming of the Capitol, they made some progress.”

Still, Hope Not Hate divides far-right online activity into three strains: mainstream platforms such as X, Instagram and Facebook, which have no interest in fascism but struggle to eradicate it and perhaps underinvest in moderation; hijacked platforms such as Discord and Telegram, which began as chat and messaging services and became far-right favourites, probably because of their superior privacy or encryption; and bespoke platforms such as Rumble (partly funded by the libertarian billionaire Peter Thiel), Gab (which became a hub of mainly antisemitic hate after the gunman in the 2018 Pittsburgh synagogue shooting posted his manifesto there) and Parler, which Kanye West agreed to buy in 2022 after he was banned from Instagram and Twitter for antisemitism, though the deal later fell through.

Composite: Guardian Design; X

“Twitter is an anomaly,” Mulhall says. “It's ostensibly a mainstream platform, but it now has its own moderation policies. Elon Musk himself is steeped in far-right politics, so it's behaving like a bespoke platform, which is what makes it so different – and so much more harmful, so much worse. Part of that is because, although it has terms of service, it doesn't necessarily enforce them.”

Musk's commitment to free speech is strikingly inconsistent. He invoked it to defy the Brazilian authorities' demands, but was happy to oblige Narendra Modi in India, suspending hundreds of accounts linked to the Indian farmers' protests in February. “Free speech is a tool for Musk, not a principle,” Mulhall says. “He's a techno-utopian with no attachment to democracy.”

But global civil society finds it very difficult to dismiss the free-speech argument out of hand, because the counterargument is so dark: that a number of billionaires – not just Musk, but Thiel of Rumble, Parler's original backer Rebekah Mercer (daughter of the Breitbart funder Robert Mercer) and, indirectly, sovereign billionaires like Putin – have set about transforming society and destroying the trust we have in one another and in our institutions. It is much more comfortable to think they are doing this by accident, out of a simple love of “free speech”, than deliberately. “The key to understanding the neo-reactionary and ‘dark enlightenment’ movements is that these individuals have no interest whatsoever in maintaining the status quo,” says Mulhall.

“In some jurisdictions, the actions of state rulers and billionaires are closely aligned,” Saperia says. We see that in Russia: “Putin is using the state to manipulate social media to create polarisation. That's pretty much proven,” Mulhall says. But where tech and politics don't line up, politics rarely prevails; governments seem largely powerless in the face of these tech giants. “Racial hatred and attempted murder are being nurtured on these platforms,” Mulhall says. “And people don't even believe it's possible to get Musk in front of Congress.”

Andrew Tate leaves court in Bucharest. Photograph: Alexandru Dobre/AP

In Paris, the Telegram founder Pavel Durov is under formal investigation over allegations that the app facilitates organised crime, and Musk has been named as a defendant in a cyberbullying lawsuit brought by the Olympic gold medallist Imane Khelif. The boxer, who was born female and has never identified as transgender or intersex, faced defamatory claims about her gender on X from a number of public figures, including the author JK Rowling and Donald Trump. Meanwhile, Andrew Tate has been charged by Romanian authorities with human trafficking and rape, but his online misogynist fantasy of treating women as a slave class, which has had far-reaching influence around the world, has proved hard to contain: YouTube, Instagram, TikTok and Facebook banned him from their platforms, but his freedom to operate on X has lessened the impact of those bans. The EU has at least been more successful than the US in holding the social media giants to the same corporate responsibilities as, say, pharmaceutical or oil companies, but regulation is still scrambling to keep up with a reality in which the sector's harms spill from the virtual world into the real one at an ever-increasing rate.

But governments don't need to step in and tell us to stop using X; we can do it ourselves. Brazilians who can no longer use X are migrating to Bluesky, which began in 2019 as a project of the Twitter co-founder Jack Dorsey. “It's been a whirlwind. In the past four days alone, as of this morning, we've added nearly 2 million new users,” Bluesky's Wang said on Monday. If we all did that (I did!), would X's power dissipate? Or would the internet simply divide into good places and bad?

Bluesky serves a similar purpose to X but is designed quite differently, Wang explains: “No single organisation controls the platform. All the code is open source, and anyone can copy the entire codebase. We don't own your data; you can take it wherever you want. We have to keep users by performing well, or they'll leave. It's a lot like how search engines work: if you degrade the experience by putting ads everywhere, people will go to a different search engine.”

Source: www.theguardian.com

Chinese tech company promises to combat online hate speech following knife attack

Chinese internet companies have made a commitment to combat “extreme nationalism” online, specifically targeting anti-Japanese sentiment. This decision comes after a tragic incident in Suzhou, where a Chinese woman lost her life while trying to protect a Japanese mother and child.

The leading companies Tencent and NetEase have stated that they will actively investigate and ban users who promote hatred and incite conflict.

A spokesperson for Tencent, operator of the messaging app WeChat, said the incident in Jiangsu province had attracted significant public attention, with some internet users stoking tensions between China and Japan and fuelling a surge in extreme nationalism.

Following the arrest of an unemployed man over the stabbing, in which the Chinese woman who intervened was killed, online reactions have ranged from celebrations of her heroism to expressions of nationalist sentiment.

Social media platforms such as Weibo and Douyin have acknowledged the presence of extreme nationalist and xenophobic content and say they are actively working to address it. This marks a significant shift, as such sentiment had long been prevalent on China’s internet with minimal intervention.

In the wake of the Suzhou tragedy, online users have drawn parallels between xenophobic content online and real-world violence, emphasizing the need for regulation to prevent further incidents. Internet companies have reported removing a substantial amount of illegal content and taking action against violating posts.

Despite the efforts by internet companies, some individuals have criticized the crackdown on anti-Japan content, revealing differing perspectives within the online community. Chinese authorities have labeled the knife attack as an isolated event, in contrast to previous incidents involving foreigners.

Further research by Lin Zhihui

Source: www.theguardian.com

Concerns Raised Over Potential Further Censorship of Pro-Palestinian Content in Meta’s Hate Speech Policy Review

The Guardian has confirmed that Meta is considering expanding and “reconsidering” its hate speech policy around the term “Zionist”. On Friday, the company contacted more than a dozen Arab, Muslim and pro-Palestinian groups to arrange meetings to discuss its plans to review the policy, to ensure that “Zionist” is not used as a proxy for “Jewish” or “Israeli”, according to an email seen by the Guardian.

According to the email sent by Meta representatives to the invited groups, the current policy allows the use of “Zionist” in political discussion as long as it does not refer to Jewish people in a dehumanising or violent way; posts are removed when the term is used explicitly as a proxy for Jews or Israelis. The company is reportedly considering the review in response to posts recently flagged by users and “stakeholders”, as first reported by The Intercept.

Senator demands answers on reports of Meta censoring pro-Palestinian content

Another organization received an email from a Meta representative stating that the company’s current policy does not allow users to attack others on the basis of protected characteristics, and that an up-to-date understanding of how people use language to refer to one another is necessary. The email noted that “Zionist” often refers to an ideology, which is not a protected characteristic, but can also be used to refer to Jews and Israelis. The organizations participating in the discussions expressed concern that the changes would lead to further censorship of pro-Palestinian voices.

Meta also gave examples of posts that would be removed, including one calling Zionists rats. The company has previously been criticized for unfairly censoring Palestinian-related content, which raises concerns about how these policies would be enforced.

In response to a request for comment, Meta spokesperson Corey Chambliss shared a previous statement referring to the “increasingly polarized public debate”. He added that Meta is assessing whether and how it can expand its nuanced handling of such language and will continue to consult stakeholders to refine the policy. The policy discussions are taking place during a high-stakes period of conflict, when accurate information and its dissemination can have far-reaching effects.

More than 25,000 Palestinians have been killed since the assault on Gaza began in October 2023. Implementing a policy like this in the midst of a genocide is extremely problematic and may cause harm to the community, an official from the American-Arab Anti-Discrimination Committee said.

Source: www.theguardian.com