Understanding Britain’s Debt Through Biscuits: How Labour MPs Embrace Viral Trends

A perennial question for progressives is how to craft a simple, impactful message about the economy. A Labour MP found an answer in a few packets of M&S biscuits.

Gordon McKee, who represents Glasgow South, posted a video that has drawn more than 3.3 million views on X. In the 101-second clip, he uses a stack of custard creams and chocolate bourbons to illustrate Britain’s debt-to-GDP ratio.

While this may not appear to be a monumental achievement, it’s worth noting that some of the world’s most prominent politicians (such as Donald Trump, Nigel Farage, and Zohran Mamdani) have effectively used well-crafted short videos to spread their campaign messages.

Yet within the Parliamentary Labour Party, Mr. McKee stands out as a pioneer: he is thought to be the only backbencher to have hired digital content creators.

The decision has paid off: a series of professionally produced videos built on this kind of everyday analogy has been crafted with virality in mind. In recent weeks, several of his colleagues, including Leeds East MP Richard Burgon, have begun to follow his lead.


“I feel like I owe an apology for starting this!” McKee remarked humorously, asserting that digital communication and campaign strategies are now essential for politicians.

He aims to release several such videos each week, focusing on platforms like Instagram, TikTok, and YouTube Shorts, which, unlike X, can reach audiences beyond the politically engaged.

“Last week I spoke at a local high school and asked how many read a daily newspaper; only one hand went up. When I asked how many used Instagram, every hand shot up,” he noted.

“While there’s been a significant shift in how people consume information in the last decade, the communication methods of politicians and MPs with their constituents have not kept pace.”

Signs indicate that the Labour machinery is gearing up. On November 21, Keir Starmer emailed Labour MPs to announce the party’s “significant investment” in a “new comprehensive training program” for digital campaigning.

Internally, the party unveiled what it dubbed a “second phase strategy” to modernize its campaigning using social media and an app called Labour One, acknowledging that “the way we campaigned in 2024 isn’t enough to secure victory in 2029.”

Backbench MPs have started taking the initiative as well. Burgon used 200 packs of Sainsbury’s fusilli to show how £1 billion compares with the UK average salary of £33,000. His video garnered around 650,000 views on X.

“This past weekend, while touring church fairs in my constituency, I was surprised by how many people had seen this video,” Burgon said. “I’ve been advocating for a wealth tax for some time and thought this would be a creative way to spread the message.”

The 106kg pasta mountain purchased by Mr. Burgon’s parliamentary team was donated to a London food bank after it became impractical to transport it to Leeds.
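As a rough sanity check on the scale of that analogy, the figures in the article can be run through a quick back-of-the-envelope calculation. The assumption that each piece of fusilli stands for one average salary is mine, not something the article states.

```python
# Back-of-the-envelope arithmetic behind the pasta analogy.
# Assumption (not stated in the article): one piece of fusilli per average salary.

billion = 1_000_000_000      # the £1bn Burgon wanted to illustrate
avg_salary = 33_000          # UK average salary cited in the article, in £
packs = 200                  # packs of Sainsbury's fusilli bought
total_weight_kg = 106        # weight of the donated "pasta mountain"

salaries_per_billion = billion / avg_salary
print(f"£1bn is roughly {salaries_per_billion:,.0f} average salaries")       # ~30,303

print(f"Each pack weighs about {total_weight_kg * 1000 / packs:.0f} g")      # ~530 g
print(f"Roughly {salaries_per_billion / packs:.0f} pieces per pack would be "
      "needed under the one-piece-per-salary assumption")                    # ~152
```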

Loughborough MP and economist Jeevun Sandher produced a James Bond-themed video during budget week explaining the various factors that influence the interest rate on government bonds. “I’d love it if people read my 2,000-word essays, but they don’t. I need to find a way to make them engaging,” he said.

He relies on his existing parliamentary staff equipped with smartphones and a ring light mounted on a tripod in his office to create online content. Social media planning features in their regular weekly status meetings.

When asked whether the government should do more to encourage MPs to modernize their communications, Mr. Sandher expressed concerns about potential restrictions.

“Communication works better when it’s more organic and people are hearing the government’s message in different ways,” he pointed out. “As long as there’s a unified vision, not everyone needs to stick to the same script.”

Some junior ministers are also branching out on social media, including Treasury minister Dan Tomlinson, who recorded a casual pre-Budget video while picking up doughnuts from the Greggs near Westminster underground station. AI minister Kanishka Narayan filmed a video on his iPhone discussing the advancement of technology in the UK.

Some cabinet ministers are engaging with the trend too. Housing Secretary Steve Reed held an “Ask Me Anything” session on Reddit in September about plans to reopen local pubs. Energy Secretary Ed Miliband, a long-time enthusiast of vertical video, used ASMR to promote a government announcement about small modular reactors (SMRs).

“During the general election, we had a significant team to support people in these efforts, but now they must do it within their own offices,” a Labour source remarked. “It’s more challenging when you’re not on the offensive and need to defend or create a positive narrative. That’s why creativity is essential. It’s a tough skill to master, but it’s absolutely crucial.”

Mr. McKee argued that this challenge is particularly pronounced for the left, because right-wing figures like Mr. Farage and shadow justice secretary Robert Jenrick excel at telling very clear and straightforward stories across various platforms.

“The task for progressives is to convey complex arguments that are realistic, aspirational, practical, and attainable, while doing so in a captivating and engaging manner,” he commented.

Source: www.theguardian.com

Roblox Controversy: Experts and MPs Call for Online Gaming Platforms to Be Covered by Australia’s Under-16 Social Media Ban

Pressure is mounting on the federal government to tackle the dangers children face on the widely used gaming platform Roblox, following a Guardian Australia report documenting a week of incidents involving virtual sexual harassment and violence.

Posing as an 8-year-old girl, the reporter encountered sexualized avatars and was subjected to cyberbullying, acts of violence, sexual assault, and inappropriate language, despite having parental control settings in place.

From December 10, platforms including Instagram, Snapchat, YouTube, and Kick will be covered by Australia’s social media ban, which prevents Australians under 16 from holding social media accounts, yet Roblox will not be included.

Independent MP Monique Ryan labeled the exclusion “inexplicable.” She remarked, “Online gaming platforms like Roblox expose children to unlimited gambling, cloned social media apps, and explicit content.”

At a press conference on Wednesday, eSafety Commissioner Julie Inman Grant stated that platforms would be examined based on their “singular and essential purpose.”

“Kids engaging with Roblox currently utilize chat features and messaging for online gameplay,” she noted. “If online gameplay were to vanish, would kids still use the messaging feature? Likely not.”


“If these platforms start introducing features that align them more with social media companies rather than online gaming ones, we will attempt to intervene.”

According to government regulations, services primarily allowing users to play online games with others are not classified as age-restricted social media platforms.


Nonetheless, some critics believe that this approach is too narrow for a platform that integrates gameplay with social connectivity. Nyusha Shafiabadi, an associate professor of information technology at Australian Catholic University, asserts that Roblox should also fall under the ban.

She highlighted that the platform enables players to create content and communicate with one another. “It functions like a restricted social media platform,” she observed.

Independent MP Nicolette Boele urged the government to rethink its stance. “If the government’s restrictions bar certain apps while leaving platforms like Roblox, which has been called a ‘pedophile hellscape’, unshielded, we will fail to safeguard children and drive them into more perilous and less regulated environments,” she remarked.

A spokesperson for the communications minister, Anika Wells, said that excluding Roblox from the teen social media ban does not mean it is free from accountability under the Online Safety Act.

A representative from eSafety stated, “We can extract crucial safety measures from Roblox that shield children from various harms, including online grooming and sexual coercion.”

eSafety said that by the end of the year, Roblox will enhance its age verification technology, restrict adults from contacting children without explicit parental consent, and set accounts to private by default for users under 16.

“Children under 16 who enable chat through age estimation will no longer be permitted to chat with adults. Alongside current protections for those under 13, we will also introduce parental controls allowing parents to disable chat for users between 13 and 15,” the spokesperson elaborated.

Should entities like Roblox not comply with child safety regulations, authorities have enforcement capabilities, including fines of up to $49.5 million.


eSafety stated it will “carefully oversee Roblox’s adherence to these commitments and assess regulatory measures in the case of future infractions.”

Joanne Orlando, an expert on digital wellbeing at Western Sydney University, pointed out that Roblox’s primary safety issues are grooming threats and the increasing monetization of children on “the world’s largest game.”

She mentioned that it is misleading to view it solely as a video game. “It’s far more significant. There are extensive social layers, and a vast array of individuals on that platform,” she observed.

Greens senator Sarah Hanson-Young criticized the government for “playing whack-a-mole” with the social media ban.

“We want major technology companies to assume responsibility for the safety of children, irrespective of age,” she emphasized.

“We need to hit these companies where it really hurts them: their business model. And that is where governments hesitate to act.”

Shadow communications minister Melissa McIntosh also expressed concerns about the platform. She said that while Roblox has introduced enhanced safety measures, “parents must remain vigilant to guard their children online.”

“The eSafety Commissioner and the government carry the responsibility to do everything within their power to protect children from the escalating menace posed by online predators,” she said.

A representative from Roblox stated that the platform is “dedicated to pioneering safety through stringent policies that surpass those of other platforms.”

“We utilize AI to scrutinize games for violating content prior to publication, we prohibit users from sharing images or videos in chats, and we implement sophisticated text filters designed to prevent children from disclosing personal information,” they elaborated.




Source: www.theguardian.com

British MPs Warn Unchecked Online Misinformation Could Fuel a Repeat of the Summer 2024 Violence

Members of Parliament have cautioned that if online misinformation is not effectively tackled, it is “just a matter of time” before viral content triggers a repeat of the violence seen in the summer of 2024.

Chi Onwurah, chair of the Commons science and technology select committee, expressed concern that ministers seem complacent regarding the threat, placing public safety in jeopardy.

The committee voiced its disappointment with the government’s response to its recent report, which found that the business models of social media companies contributed to the unrest that followed the Southport murders.

In response to the committee’s findings, the government dismissed proposals for legislation aimed at generative artificial intelligence platforms and said it would refrain from direct intervention in the online advertising sector, which MPs argued had incentivized the creation of harmful content after the attack.

Onwurah noted that while the government concurs with most conclusions, it fell short of endorsing specific action recommendations.

Onwurah accused ministers of compromising public safety, stating: “The government must urgently address the gaps in the Online Safety Act (OSA); instead, it seems satisfied with the harm caused by the viral proliferation of legal but detrimental misinformation. Public safety is at stake, and it’s only a matter of time before we witness a repeat of the misinformation-driven riots of summer 2024.”

In their report titled ‘Social Media, Misinformation and Harmful Algorithms’, MPs indicated that inflammatory AI-generated images were shared on social media following the stabbing that resulted in the deaths of three children, warning that AI tools make it increasingly easier to produce hateful, harmful, or misleading content.

In its response, published by the committee on Friday, the government said that no new legislation is necessary, insisting that AI-generated content already falls under the OSA, which regulates social media content, and arguing that new legislation would hinder the act’s implementation.

However, the committee highlighted Ofcom’s evidence, where officials from the communications regulator admitted that AI chatbots are not fully covered by the current legislation and that further consultation with the tech industry is essential.

The government also declined to take prompt action regarding the committee’s recommendation to establish a new entity aimed at addressing social media advertising systems that allow for the “monetization of harmful and misleading content,” such as misinformation surrounding the Southport murders.

In response, the government acknowledged concerns regarding the lack of transparency in the online advertising market and committed to ongoing reviews of industry regulations. They added that stakeholders in online advertising seek greater transparency and accountability, especially in safeguarding children from illegal ads and harmful products and services.

Addressing the committee’s request for additional research into how social media algorithms amplify harmful content, the government said that Ofcom is “best positioned” to determine whether an investigation should be conducted.

In correspondence with the committee, Ofcom indicated that it has begun work on recommendation algorithms but acknowledged the need for further exploration across a broader spectrum of academic and research fields.

The government also dismissed the committee’s call for an annual report to Parliament on the current state of online misinformation, arguing that it could hinder efforts to curtail the spread of harmful information online.

The British government defines misinformation as the careless dissemination of false information, while disinformation refers to the intentional creation and distribution of false information intended to cause harm or disruption.

Onwurah highlighted concerns regarding AI and digital advertising as particularly troubling. “Specifically, the inaction on AI regulation and digital advertising is disappointing,” she stated.

“The committee remains unconvinced by the government’s assertion that the OSA adequately addresses generative AI, and this technology evolves so swiftly that additional efforts are critically needed to manage its impact on online misinformation.

“And how can we combat that without confronting the advertising-driven business models that incentivize social media companies to algorithmically amplify misinformation?”

Source: www.theguardian.com

British MPs Urged to Investigate TikTok’s Plan to Cut 439 Content Moderator Jobs

Trade unions and online safety advocates are urging Members of Parliament to examine TikTok’s decision to cut hundreds of UK-based content moderation jobs.

The social media platform intends to reduce its workforce by 439 positions within its trust and safety team in London, raising alarms about the potential risks to online safety associated with these layoffs.

Trade unions, including the Communication Workers Union (CWU), and prominent figures in online safety have written an open letter to the Labour MP Chi Onwurah, who chairs the Commons science, innovation, and technology committee, seeking an inquiry into the plans.

The letter references estimates from the UK’s data protection authority indicating that as many as 1.4 million TikTok users could be under the age of 13, cautioning that these reductions might leave children vulnerable to harmful content. TikTok boasts over 30 million users in the UK.

“These safety-focused staff members are vital in safeguarding our users and communities against deepfakes, harm, and abuse,” the letter asserts.

Additionally, TikTok has suggested it might substitute moderators with AI-driven systems or workers from nations like Kenya and the Philippines.





The signatories also accuse the Chinese-owned TikTok of undermining the union by announcing the layoffs just eight days before a planned vote on union recognition in the CWU’s tech sector.

“There is no valid business justification for enacting these layoffs. TikTok’s revenue continues to grow significantly, with a 40% increase. Despite this, the company has chosen to make cuts. We perceive this decision as an act of union-busting that compromises worker rights, user safety, and the integrity of online information,” the letter elaborates.

Among the letter’s signatories are Ian Russell, the father of Molly Russell, a British teenager who took her own life after encountering harmful online content; the former Meta employee turned whistleblower Arturo Bejar; and Sonia Livingstone, a professor of social psychology at the London School of Economics.

The letter also urges the committee to evaluate the implications of the job cuts for online safety and workers’ rights, and to explore legal avenues to prevent content moderation from being outsourced and human moderators from being replaced by AI.

When asked for comment on the letter, Onwurah said the layoff plans called TikTok’s content moderation efforts into question, stating: “The role that recommendation algorithms play on TikTok and other platforms in exposing users to considerable amounts of harmful and misleading content is evident and deeply troubling.”


Onwurah mentioned that the impending job losses were questioned during TikTok’s recent appearance before the committee, where the company reiterated its dedication to maintaining security on its platform through financial investments and staffing.

She remarked: “TikTok has conveyed to the committee its assurance of maintaining the highest standards to safeguard both its users and employees. How does this announcement align with that commitment?”

In response, a TikTok representative stated: “We categorically refute these allegations. We are proceeding with the organizational restructuring initiated last year to enhance our global operational model for trust and safety. This entails reducing the number of centralized locations worldwide and leveraging technological advancements to improve efficiency and speed as we develop this essential capability for the company.”

TikTok confirmed it is engaging with the CWU voluntarily and has expressed willingness to continue discussions with the union after the current layoff negotiations are finalized.



Source: www.theguardian.com

MPs Urge Government to Prioritize Artists’ Rights in AI Copyright Debate

Two cross-party committees of MPs are urging the government to prioritize ensuring that creators are fairly rewarded for their work over facilitating the training of artificial intelligence models.

Lawmakers are advocating for more transparency about the data used to train generative AI models and urging the government not to press ahead with plans that would require creators to opt out of their work being used as training data.

The government’s proposed solution to the tension between AI and copyright law includes a “text and data mining” exception allowing AI companies to train models on copyrighted work, with creators able to opt out through a “rights reservation” system.

Caroline Dinenage, chair of the Culture, Media and Sport Committee, expressed concern over the creative industry’s response to the proposal, highlighting the threat that unauthorized use of their work poses to artists’ hard-earned success.

She emphasized the importance of fair treatment for creators and the need for transparency in data used to train AI models to ensure proper rewards for their work.

The Culture, Media and Sport Committee and the Science, Innovation and Technology Committee responded to the government’s consultation on AI and copyright after a joint evidence session with representatives from AI startups and the creative industries.

Their letter to ministers calls on the government to improve transparency about training data, protect copyright holders who opt out, and empower consumers to make informed choices about AI models.

Failure to address these issues could disproportionately impact smaller creators and journalists operating under financial constraints, according to the letter.

Concerns among celebrities and the wider creative industry about the government’s AI proposals have prompted protests, including musicians releasing a silent album.


The letter also highlighted the need for transparency in training data for AI models, citing examples from the EU and California which have introduced requirements for detailed technical records on training data.

The government is considering revenue-sharing models for AI developers to address copyright concerns, and the letter urges it to conduct full impact assessments of the proposed options.

The letter cautioned against AI developers moving to jurisdictions with more lenient rules and emphasized the need for compliance, enforcement, and remedies for copyright issues.

Source: www.theguardian.com

AI tool can offer ministers a ‘vibe check’ on how their policies will be received by MPs

A new artificial intelligence tool known as Parlex can warn ministers when a policy is likely to be unpopular within their own party, through what is described as a “parliamentary mood check.”

Parlex is just one of the AI tools being developed for ministers and civil servants to anticipate issues with backbenchers and pinpoint supportive legislators.

By inputting a policy outline, such as a 20mph speed limit, users can see a prediction of how legislators will respond based on their past contributions in Parliament. A demonstration video on the government website shows historical opposition from Conservative MPs to, and support from Labour MPs for, traffic calming measures.

Described as a “vibe check,” the tool helps policy teams understand the political landscape and develop response strategies before a policy is formally proposed in Parliament.
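The article describes Parlex only at this high level, so the following is a purely illustrative sketch of how a “mood check” over past contributions might work in principle. The sample contributions, the keyword lists, and the scoring rule are all invented for illustration; nothing here reflects the government’s actual implementation, which would presumably rely on far more sophisticated language models.

```python
# Toy illustration of a "parliamentary vibe check": score a policy summary
# against (invented) past contributions. Not Parlex itself.

# Hypothetical snippets of past contributions, invented for this sketch.
past_contributions = {
    "MP A": "Blanket 20mph speed limits punish motorists and should be scrapped.",
    "MP B": "Lower speed limits near schools save lives and I will always back them.",
    "MP C": "I am undecided on a default 20mph limit until we see the evidence.",
}

# Crude sentiment cues; a real system would use a trained model, not keywords.
SUPPORT = {"back", "save", "support", "welcome"}
OPPOSE = {"punish", "scrapped", "oppose", "against"}

def vibe_check(policy: str, contributions: dict[str, str]) -> dict[str, str]:
    """Label each MP as likely supportive, opposed, or unclear on the policy."""
    policy_terms = set(policy.lower().split())
    results = {}
    for mp, text in contributions.items():
        words = set(text.lower().replace(".", "").split())
        if not words & policy_terms:          # MP has never spoken on the topic
            results[mp] = "no relevant contributions"
            continue
        score = len(words & SUPPORT) - len(words & OPPOSE)
        results[mp] = ("likely support" if score > 0
                       else "likely oppose" if score < 0 else "unclear")
    return results

print(vibe_check("introduce a default 20mph speed limit in towns", past_contributions))
# {'MP A': 'likely oppose', 'MP B': 'likely support', 'MP C': 'unclear'}
```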

According to a report by The Times, key MPs like Iain Duncan Smith and former MP Tobias Ellwood oppose the 20mph limit, while Labour MP Kerry McCarthy supports traffic calming measures.

The tool is expected to be more useful for civil servants than for ministers, who should already have a good sense of parliamentary opinion.

Prime minister Keir Starmer recently announced an AI plan involving significant investments in Britain’s computing capacity to integrate the technology into the nation’s infrastructure.

The government’s initiatives include releasing public data to foster AI businesses, including anonymized NHS data for research and innovation purposes with strong privacy safeguards in place.

Ministers believe AI could stimulate Britain’s economic growth and generate an estimated economic boost of up to £470bn over the next decade.

Parlex is just one of many AI tools being developed within the government, with other tools like Redbox aimed at automating document analysis for civil servants.


The tool will soon be available to all civil servants in the Cabinet Office and DSIT, aiming to streamline manual processes and increase efficiency.

Another program, called Consult, automates the processing of public consultations, allowing civil servants to better analyze and act on public opinion.

The Department for Work and Pensions has also used AI, including a tool for analyzing the “white mail” letters it receives daily, to communicate information more effectively.

However, challenges have arisen, such as government algorithms underperforming and producing inaccuracies when used to identify suspected housing benefit fraud.

Source: www.theguardian.com

MPs Call on Elon Musk to Testify about X’s Involvement in UK Summer Riots

MPs in a parliamentary inquiry into the UK riots and the proliferation of false and harmful AI content are set to call on Elon Musk to testify about X’s role in spreading disinformation, as reported by The Guardian.

Additionally, senior executives from Meta, the company behind Facebook and Instagram, and from TikTok are expected to be summoned for questioning as part of the Commons Science and Technology Select Committee’s social media inquiry.

The first public hearing is scheduled for the new year, amidst concerns that current online safety laws in Britain are at risk of being outpaced by advancing technology and the politicization of platforms like X.

Images shared on Facebook and X were reportedly used to incite Islamophobic protests in August following the deaths of three schoolgirls in Southport. The inquiry aims to investigate the impact of generative AI and examine the Silicon Valley business models that facilitate the spread of misleading and potentially harmful content.

The committee’s chair, Labour MP Chi Onwurah, expressed interest in questioning Musk about his stance on freedom of expression and disinformation. Musk, the owner of X, has been critical of the UK government and was not invited to an international investment summit in September.

The former Labour cabinet minister Peter Mandelson has called for an end to Musk’s feud with the British government, emphasizing the importance of not overlooking Musk’s influence in the technological and commercial sphere.

Despite speculation, it remains uncertain whether Musk will testify in the UK, as he is reportedly gearing up for a senior role in President Trump’s White House. Amidst these developments, millions of X users are said to have migrated to a new platform called Bluesky, raising concerns about misinformation and the presence of previously banned users.

The investigation also aims to explore the connection between social media algorithms, generative AI, and the dissemination of false or harmful content. Additionally, the use of AI to complement search engines, such as Google, will be scrutinized in light of recent instances of false and racist claims propagated on online platforms.

In response to the spread of misinformation and incitement after the Southport killings, Ofcom, the UK’s communications regulator, has highlighted the need for social media companies to address activity that incites violence or spreads false claims. New rules under the Online Safety Act will require companies to take action to prevent the spread of illegal content and minimize safety risks.

Source: www.theguardian.com