Labor unions and online safety advocates are urging Members of Parliament to examine TikTok’s decision to eliminate hundreds of content moderation jobs based in the UK.
The social media platform intends to reduce its workforce by 439 positions within its trust and safety team in London, raising alarms about the potential risks to online safety associated with these layoffs.
Trade unions, including the Communication Workers Union (CWU), and prominent figures in online safety have written an open letter to Chi Onwurah MP, the Labour chair of the Commons science, innovation and technology committee, seeking an inquiry into the plans.
The letter cites estimates from the Information Commissioner’s Office, the UK’s data protection authority, that as many as 1.4 million TikTok users could be under the age of 13, cautioning that the cuts might leave children exposed to harmful content. TikTok has more than 30 million users in the UK.
“These safety-focused staff members are vital in safeguarding our users and communities against deepfakes, harm, and abuse,” the letter asserts.
Additionally, TikTok has suggested it might substitute moderators with AI-driven systems or workers from nations like Kenya and the Philippines.
The signatories also accuse the Chinese-owned TikTok of undermining the union by announcing layoffs just eight days prior to a planned vote on union recognition within the CWU technology sector.
“There is no valid business justification for these layoffs. TikTok’s revenue continues to grow strongly, up 40%, yet the company has chosen to make cuts. We see this decision as an act of union-busting that compromises worker rights, user safety, and the integrity of online information,” the letter continues.
Among the letter’s signatories are Ian Russell, the father of Molly Russell, a British teenager who took her own life after viewing harmful online content; the Meta whistleblower Arturo Béjar; and Sonia Livingstone, a professor of social psychology at the London School of Economics.
The letter also urges the committee to evaluate the implications of the job cuts for online safety and worker rights, and to explore legal avenues to prevent content moderation from being outsourced and human moderators from being replaced by AI.
When asked for comments regarding the letter, Onwurah noted that the layoff strategy suggests TikTok’s content moderation efforts are under scrutiny, stating, “The role that recommendation algorithms play on TikTok and other platforms in exposing users to considerable amounts of harmful and misleading content is evident and deeply troubling.”
Onwurah noted that the impending job losses were raised during TikTok’s recent appearance before the committee, where the company reiterated its commitment to keeping its platform safe through investment and staffing.
She remarked: “TikTok has conveyed to the committee its assurance of maintaining the highest standards to safeguard both its users and employees. How does this announcement align with that commitment?”
In response, a TikTok representative stated: “We categorically refute these allegations. We are proceeding with the organizational restructuring initiated last year to enhance our global operational model for trust and safety. This entails reducing the number of centralized locations worldwide and leveraging technological advancements to improve efficiency and speed as we develop this essential capability for the company.”
TikTok confirmed it is engaging with the CWU voluntarily and has expressed willingness to continue discussions with the union after the current layoff negotiations are finalized.
Source: www.theguardian.com
New Online Safety Regulations Put Hundreds of TikTok UK Moderators’ Jobs in Jeopardy
TikTok is putting the jobs of hundreds of UK content moderators at risk, even as stricter rules aimed at curbing the spread of harmful material online come into force.
The popular video-sharing platform announced that hundreds of positions within its trust and safety teams could be impacted in the UK, as well as South and Southeast Asia, as part of a global reorganization effort.
Their responsibilities will be shifted to other European offices and third-party providers, with some trust and safety roles remaining in the UK, the company said.
This move aligns with TikTok’s broader strategy to utilize artificial intelligence for content moderation. The company stated that over 85% of materials removed for violating community guidelines have been identified and deleted through automation.
The cuts come even as new UK online safety laws take effect, requiring companies to carry out age verification checks on users accessing potentially harmful content. Companies that fail to comply risk fines of up to £18 million or 10% of global revenue.
John Chadfield from the Communication Workers Union expressed concerns that replacing human moderators with AI could endanger the safety of millions of TikTok users.
“TikTok employees have consistently highlighted the real-world implications of minimizing human moderation teams in favor of hastily developed AI solutions,” he remarked.
TikTok, which is owned by the Chinese tech firm ByteDance, has a workforce of over 2,500 in the UK.
In the past year, TikTok has decreased its trust and safety personnel globally, often substituting automated systems for human workers. In September, the company laid off an entire team of 300 content moderators in the Netherlands, and in October, it disclosed plans to replace approximately 500 content moderation staff in Malaysia as part of its shift towards AI.
TikTok employees in Germany recently went on strike over the layoffs in its trust and safety team.
Meanwhile, TikTok’s business is thriving. Accounts filed with Companies House show that revenue from its combined UK and European operations reached $6.3 billion (£4.7 billion) in 2024, a 38% increase on the year before, while its operating loss narrowed from $1.4 billion in 2023 to $485 million.
A TikTok spokesperson said the company is “continuing the reorganization initiated last year to enhance its global operational model for trust and safety.” This involves concentrating on fewer locations globally and leveraging technological advances to increase efficiency and speed as it evolves this essential function for the company.
Source: www.theguardian.com
Meta Sued in Ghana for Effects of Extreme Content on Moderators
Meta is now facing a second lawsuit in Africa related to the psychological trauma endured by content moderators tasked with filtering out disturbing material on social media, including depictions of murder, extreme violence, and child sexual abuse.
A lawyer is preparing to take legal action against a contractor of Meta, the parent company of Facebook and Instagram, following discussions with moderators at a facility in Ghana that reportedly employs around 150 individuals.
Moderators at Majorel in Accra report suffering from depression, anxiety, insomnia, and substance abuse, which they link directly to their work reviewing extreme content.
The troubling conditions faced by Ghanaian workers have come to light through a collaborative investigation by the Guardian and the Bureau of Investigative Journalism.
This issue arose after over 140 Facebook content moderators in Kenya were diagnosed with severe post-traumatic stress disorder due to their exposure to traumatic social media content.
The Kenyan workers were employed by Samasource, an outsourcing company that recruits personnel from across Africa for content moderation work for Meta. The Majorel facility at the center of the allegations in Ghana is owned by the French multinational Teleperformance.
One individual, who cannot be identified for legal reasons, disclosed that he attempted suicide due to his work. His contract has since expired, and he claims to have returned to his home country.
Facebook and similar large social media platforms often employ numerous content moderators in some of the world’s most impoverished regions, tasked with removing posts that violate community standards and aiding in training AI systems for the same purpose.
Content moderators are required to review distressing and often brutal images and videos to determine if they should be taken down from Meta’s platform. According to reports from Ghanaian workers, they have witnessed videos including extreme violence, such as people being skinned alive or women being decapitated.
Moderators have claimed that the mental health support provided by the company is inadequate, lacking professional oversight, and there are concerns that personal disclosures regarding the impact of their work are being circulated among management.
Teleperformance contested this claim, asserting that it employs a licensed mental health professional, registered with a local regulatory body, who holds a master’s degree in psychology, counseling, or a related mental health field.
The legal action is being prepared by the UK-based nonprofit Foxglove. It would be the second lawsuit brought by African content moderators, following the case filed by Kenya’s Samasource workers in December.
Foxglove has stated they will “immediately investigate these alarming reports of worker mistreatment,” with the goal of employing “all available methods, including potential legal action,” to enhance working conditions.
They are collaborating with Agency Seven Seven, a Ghanaian firm, to prepare two potential cases. One could involve claims of unfair dismissal, including a group of moderators who allege psychological harm, along with an East African moderator whose contract ended following a suicide attempt.
Martha Dark, co-executive director at Foxglove, remarked:
“In Ghana, Meta seems to completely disregard the humanity of the crucial safety workers its entire business relies on: content moderators.”
Dark noted that the base wages for content moderators in Accra fall below the living wage, with pressures to work overtime. Moderators reportedly face pay deductions for not meeting performance targets, she indicated.
Contracts obtained by the Guardian show that starting wages are around 1,300 Ghanaian Cedis per month. This base pay is supplemented by a performance-related bonus system, with the highest earnings reaching approximately 4,900 Cedis (£243) per month, significantly less than what is needed for a decent living, according to living costs in Accra.
A spokesperson for Teleperformance stated that content moderators receive “a competitive salary and benefits,” including a monthly income approximately 10 times the national minimum wage for local moderators, and 16 times the minimum wage from other countries, along with project allowances and other benefits, all automatically provided and not contingent on performance.
Foxglove researcher Michaela Chen observed that some moderators are crammed into tight living spaces: “Five individuals were packed into a single room.” She mentioned the existence of a secretive culture of surveillance from managers that monitors workers even during breaks.
This surveillance extends to the work of Meta moderators. She stated: “Workers dedicate all day to the Meta platform, adhering to Meta’s standards and utilizing its systems, yet they are constantly reminded, ‘You’re not working for Meta,’ and are prohibited from disclosing anything to anyone.”
Teleperformance asserted that the moderators are housed in one of Accra’s most luxurious and well-known residential and commercial zones.
The spokesperson described the accommodation as “secure and offering high levels of safety,” with amenities including air conditioning, a gym, and a pool.
Agency Seven Seven partner Carla Olympio believes personal injury claims could succeed in Ghanaian courts, saying they would set a legal precedent establishing that employee protections extend to psychological as well as physical harm.
“[There exists] a gap in our laws as they do not adequately address advancements in technology and virtual work,” she expressed.
Rosa Curling, co-director at Foxglove, has called upon the court to “mandate immediate reforms in the work environment for content moderators,” ensuring proper protective measures and mental health care.
A Teleperformance spokesperson stated: “We are committed to addressing content moderation in Ghana. We fully disclose the type of content moderators may encounter throughout the hiring process, employee contracts, training sessions, and resilience assessments, while actively maintaining a supportive atmosphere for our content moderators.”
Meta commented that the companies it partners with are “contractually obligated to ensure that employees engaged in content reviews on Facebook and Instagram receive adequate support that meets or exceeds industry standards.”
The tech giant further stated it “places great importance on the support provided to content reviewers,” detailing expectations for counseling, training, and other resources when engaging with outsourced companies.
All content moderators said they had signed non-disclosure agreements, both because of the sensitivity of the user information they handle and for their own safety; they are, however, permitted to discuss their experiences with medical professionals and counselors.
Source: www.theguardian.com
Over 140 Facebook moderators in Kenya diagnosed with severe PTSD after exposure to graphic content
Over 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder as a result of being exposed to distressing social media content, including violent acts, suicides, child abuse, and terrorism.
The moderators, based at a facility in Kenya contracted to moderate content for social media companies, worked long hours and were diagnosed with PTSD, generalized anxiety disorder (GAD), and major depressive disorder (MDD) by Dr. Ian Kanyanya, the head of mental health services at Kenyatta National Hospital in Nairobi.
A lawsuit filed against Meta, Facebook’s parent company, and the outsourcing company Samasource Kenya, which employed moderators from across Africa, brought to light the distressing experiences faced by these employees.
Images and videos depicting disturbing content caused some moderators to have physical and emotional reactions such as fainting, vomiting, screaming, and leaving their workstations.
The lawsuit sheds light on the toll that moderating such content takes on individuals in regions where social media usage is on the rise, often in impoverished areas.
Many of the moderators in question turned to substance abuse, experienced relationship breakdowns, and felt disconnected from their families, due to the nature of their work.
Facebook and other tech giants use content moderators to enforce community standards and train AI systems to do the same, outsourcing this work to countries like Kenya.
A medical report submitted to the court depicted a bleak working environment where moderators were constantly exposed to distressing images in a cold, brightly lit setting.
The majority of the affected moderators suffered from PTSD, GAD, or MDD, with severe symptoms affecting a significant portion of them, even after leaving their roles.
Meta and Samasource declined to comment on the allegations due to the ongoing litigation.
Foxglove, a nonprofit supporting the lawsuit, highlighted the lifelong impact that this work has had on the mental health of the moderators.
The lawsuit aims to hold the companies accountable for the traumatic experiences endured by the moderators in the course of their duties.
Content moderation tasks, though often overlooked, can have significant long-term effects on the mental health of those involved, as seen in this case.
Meta stresses the importance of supporting its content moderators through counseling, training, on-site support, and access to healthcare, while implementing measures to reduce exposure to graphic material.
Source: www.theguardian.com
