What actions can the UK government take regarding Twitter? Should it take any? And where do Elon Musk’s interests lie?
The billionaire proprietor of the social network, officially renamed X, has had an eventful week causing disruption on his platform. Beyond his own posts, which included low-quality memes sourced from 8chan and reshared fake concern from far-right figures, the platform as a whole, along with the other two of the three “T’s,” TikTok and Telegram, briefly played a significant role in orchestrating the chaos.
There is a consensus that action needs to be taken. Bruce Daisley, former VP for EMEA at Twitter, proposes personal liability for executives:
In the near term, Musk and other executives should be reminded that they are legally liable for their actions under current law. The UK’s Online Safety Act 2023 should be promptly bolstered. Prime Minister Keir Starmer and his team should carefully consider whether Ofcom, the media regulator frequently criticized for its handling of organizations like GB News, can keep pace with someone as fast-moving as Musk. In my view, the threat of personal consequences weighs far more heavily on corporate executives than the prospect of a corporate fine. If Musk continues to incite unrest, an arrest warrant might produce sparks from his fingertips, but for a jet-setting personality it would be a compelling deterrent.
Last week, London Mayor Sadiq Khan presented his own suggestion.
“The government swiftly realized the need to reform the online safety law,” Khan told the Guardian in an interview. “I believe the government must urgently ensure that this law is fit for purpose. I don’t think it currently is.”
“Responsible social media platforms can take action,” Khan remarked, but added that “if they fail to address their own issues, regulation will be enforced.”
When I spoke on Monday to Euan McGaughey, a law professor at King’s College London, he offered more precise recommendations on what the government could do. He noted that the Communications Act 2003 underpins many of Ofcom’s powers and is used to regulate broadcast television and radio, but its reach extends beyond those media.
Because section 232 specifies that “television licensable content services” include distribution “by any means involving the use of an electronic communications network,” the Act already empowers Ofcom to regulate online media content. Ofcom could exercise this power, but it is highly unlikely to, since it anticipates legal challenges from tech companies, including those fueling riots and conspiracy theories.
Even if Ofcom or the government were reluctant to reinterpret the old law, minor amendments could bring Twitter under stricter broadcast-style regulatory oversight, he added.
For instance, there is no distinction between Elon Musk posting a video on X about (so-called) two-tier policing, discussing “detention camps” or asserting “civil war is inevitable,” and ITV, Sky, or the BBC broadcasting the news… The Online Safety Act is grossly insufficient here, as its constraints merely aim to prevent “illegal” content and do not inherently address false or dangerous speech.
The law of keeping promises
It may seem peculiar to feel sympathy for an inanimate object, but the Online Safety Act has arguably been judged harshly given how little of it has been enforced so far. A comprehensive law of more than 200 individual clauses, it was enacted in 2023, but most of its provisions will only take effect once Ofcom has completed its extensive consultation process and established a code of practice.
The law introduces a few new offenses, such as bans on cyber-flashing and upskirt photography. Sections of the old law on malicious communications have been replaced with new, more precise offenses such as threatening and false communications, two of which were used for the first time this week.
But what if this had all happened earlier and Ofcom was operational? Would the outcome have been different?
The Online Safety Act is a peculiar piece of legislation: an effort to curb the internet’s worst impulses, drafted by a government positioning itself on the free-speech side of a growing culture war and enforced by a regulator staunchly unwilling to pass judgment on individual social media posts.
What transpired was either a skillful act of navigating a tricky situation or a clumsy mishap, depending on who you ask. The Online Safety Act does not outright criminalize everything on the web; instead, it mandates social media companies to establish specific codes of conduct and consistently enforce them. For certain forms of harm like incitement to self-harm, racism, and racial hatred, major services must at least provide adults with the option to opt out of such content and completely block it from children. For illegal content ranging from child abuse imagery to threats and false communications, it requires new risk assessments to aid companies in proactively addressing these issues.
It’s understandable why this legislation faced significant backlash upon its passage: its main consequence was a mountain of new paperwork in which social networks had to demonstrate adherence to what they had always purportedly done: attempting to mitigate racist abuse, addressing child abuse imagery, enforcing their terms of use, and so forth.
Advocates of the law argue that it serves less to force companies to change their behavior than to give Ofcom the means to hold them to their own promises. The easiest way to earn a penalty under the Online Safety Act – a fine of potentially up to 10% of global turnover, in a structure modeled on GDPR – is to announce loudly to users that you are taking steps to tackle problems on your platform, and then do nothing.
One could envision the CEO of a tech company, the key antagonist in this play, standing before an inquiry and solemnly asserting that the reprehensible behavior under discussion violates their terms of service, then returning to the office and taking no action.
The challenge for Ofcom lies in the fact that multinational social networks are not generally run by cartoonish villains who overrule their legal departments, defy their moderators, and whimsically enforce one set of terms of service on allies and a different one on adversaries.
Except for one.
Do as I say, not as I do
Elon Musk’s Twitter has emerged as a prime test case for online safety laws. On the surface, the social network appears relatively ordinary: its terms of service prohibit the dissemination of much of the same content as other major networks, with a slightly more lenient stance on pornographic material. Twitter maintains a moderation team that employs both automated and human moderation to remove objectionable content, an appeals process for individuals alleging unfair treatment, and progressive penalties that could ultimately lead to account suspensions for violations.
However, there is a second layer to how Twitter operates: whatever Elon Musk says, goes. For instance, last summer a prominent right-wing influencer shared child abuse images; the creator of those images had received a 129-year prison sentence. The influencer’s motive remains unclear, but the account was swiftly suspended. Musk then intervened:
While Twitter’s terms of service theoretically prohibit many of the egregious posts related to the UK riots, under headings such as “hateful conduct” and “inciting, glorifying, or expressing a desire for violence,” those rules do not seem to be consistently enforced. This is where Ofcom could eventually take aggressive action against Musk and his companies.
If you wish to read the entire newsletter, subscribe to receive TechScape in your inbox every Tuesday.