Social media platforms continue to disseminate content related to depression, suicide, and self-harm among teenagers, despite the introduction of new online safety regulations designed to safeguard children.
The Molly Rose Foundation set up a fake account posing as a 15-year-old girl and interacted with posts about suicide, self-harm, and depression. This prompted the algorithms to flood the account with a “tsunami of harmful content on Instagram reels and TikTok pages,” according to the charity’s analysis.
An alarming 97% of recommended videos viewed on Instagram reels and 96% on TikTok were found to be harmful. Furthermore, over half (55%) of TikTok’s harmful recommended posts included references to suicide and self-harm, while 16% contained protective references to users.
These harmful posts garnered substantial viewership. One particularly damaging video was liked over 1 million times on TikTok’s For You Page, and on Instagram reels, one in five harmful recommended videos received over 250,000 likes.
Andy Burrows, CEO of The Molly Rose Foundation, stated: “Persistent algorithms continue to bombard teenagers with dangerous levels of harmful content. This is occurring on a massive scale on the most popular platforms among young users.”
“In the two years since our last study, it is shocking that the magnitude of harm has not been adequately addressed, and that risks have been actively exacerbated on TikTok.
“The measures instituted by Ofcom to mitigate algorithmic harms are, at best, temporary solutions and are insufficient to prevent preventable damage. It is crucial for governments and regulators to take decisive action to implement stronger regulations that platforms cannot overlook.”
Researchers who examined content on the platforms between November 2024 and March 2025 found that while both platforms allowed teenagers to give negative feedback on content, as Ofcom requires under the Online Safety Act, the same feature also let them give positive feedback on that material.
The charity’s report, produced in conjunction with Bright Data, found that while the platforms have made it harder to search for harmful content using hashtags, they still amplify such material through personalized AI recommendation systems once a user has engaged with it. The report also observed that platforms often rely on overly broad definitions of harm.
This study provided evidence linking exposure to harmful online content with increased risks of suicide and self-harm.
It also found that social media platforms profited from advertisements placed next to many harmful posts, including ads from fashion and fast-food brands popular with teenagers, as well as UK universities.
Ofcom has begun implementing child safety codes under the Online Safety Act aimed at “taming toxic algorithms.” The Molly Rose Foundation, which receives funding from Meta, has expressed concern that the regulator has proposed just £80,000 for these improvements.
A spokesperson for Ofcom stated, “Changes are underway. Since this study was conducted, new measures have been introduced to enhance online safety for children. These will make a significant difference, helping to prevent exposure to the most harmful content, including materials related to suicide and self-harm.”
Technology Secretary Peter Kyle said that 45 sites have been under investigation since the Online Safety Act came into force. “Ofcom is also exploring ways to strengthen existing measures, such as employing proactive technologies to protect children from self-harm and recommending that platforms enhance their algorithmic safety,” he added.
A TikTok spokesperson commented: “TikTok accounts for teenagers come equipped with over 50 safety features and settings that allow for self-expression, discovery, and learning while ensuring safety. Parents can further customize content and privacy settings for their teens through family pairing.”
A Meta spokesperson said: “We dispute the claims made in this report and the limited methodology behind them.
“Millions of teenagers currently use Instagram’s teen accounts, which offer built-in protections that limit who can contact them, the content they can see, and the time they spend on Instagram. We continue to use automated technology to remove content that promotes suicide and self-harm.”
Source: www.theguardian.com