A study spearheaded by a Meta whistleblower reveals that children and teens still face serious dangers on Instagram because its much-publicized safety features are "highly ineffective."
A thorough examination led by Arturo Bejar, a former senior engineer at Meta who has testified before the US Congress, together with scholars from NYU and Northeastern University, the Molly Rose Foundation in the UK, and other organizations, found that 64% of Instagram's newly introduced safety measures were ineffective.
Meta, the parent company of Facebook, WhatsApp, Messenger, and Threads, introduced mandatory teen accounts on Instagram in September 2024.
However, Bejar stated that Meta has “consistently failed” to protect children from sensitive or harmful content, inappropriate interactions, and excessive use, claiming the safety features are “ineffective, unacceptable, and have been quietly altered or removed.”
He emphasized: “The lack of transparency within Meta, the duration of this neglect, and the number of teens harmed on Instagram due to their negligence and misleading safety assurances is alarming.”
“Children, including many under 13, are not safe on Instagram. This isn’t solely about bad content online; it’s about negligent product design. Meta’s intentional design choices promote and compel children to engage with inappropriate content and interactions daily.”
The research used "test accounts" that mimicked the behavior of teens, parents, and potential predators to evaluate 47 safety features between March and June 2025.
Using a green, yellow, and red rating system, the researchers found that 30 of the tools fell into the red category, meaning they could be easily circumvented or ignored with minimal effort. Only eight received a green rating.
Findings from the test accounts revealed that adults could easily send messages to teens who did not follow them, despite indications that such contact was blocked. Meta says this has been prevented since the testing period, but the researchers also found that minors could initiate conversations with adults on the platform, and that reporting sexual or inappropriate messages was difficult.
The research also highlighted that the "hidden words" feature failed to block offensive language as promised: testers were able to send messages saying, "You are a prostitute and you should kill yourself." Meta clarified that the feature applies only to messages from unknown accounts, not from followers.
Instagram's algorithms still promoted inappropriate sexual and violent content, and the "not interested" feature proved ineffective. Researchers also found that the platform actively recommends search terms and accounts related to suicide, self-harm, eating disorders, and illegal substances.
Furthermore, the researchers reported that several well-publicized time-management tools aimed at curbing compulsive use appeared to have been discontinued; Meta maintains that these features still exist in altered form. And despite Meta's claims that it blocks under-13 accounts, the researchers identified hundreds of reels posted by users claiming to be under 13 years old.
The report noted that Meta continues to design Instagram's reporting features in a way that discourages actual use.
In the report's introduction, co-authors Ian Russell of the Molly Rose Foundation and Maurine Molak of David's Legacy Foundation highlighted tragic cases where children died by suicide after encountering harmful online content.
Consequently, they advocate for stronger online safety laws in the UK.
The report also urges regulators to adopt a “bolder and more assertive” stance on implementing regulatory measures.
A spokesperson from Meta stated: “This report misrepresents our ongoing efforts to empower parents and safeguard teens, misunderstanding how our safety tools function and how millions of parents and teens utilize them today. Our teen accounts are the industry standard for automated safety protections and parental controls.”
“In reality, teens using these protections encounter less sensitive content and receive fewer unwanted contacts while spending time on Instagram safely. Parents also have robust monitoring tools in place. We are committed to improving our features and welcome constructive criticism, though this report doesn’t reflect that.”
An Ofcom spokesperson commented:
“Our online rules for children necessitate a safety-first approach in how technology companies design and operate their services in the UK.
“Clearly, sites that fail to comply can expect enforcement action.”
A government representative added: "Under the Online Safety Act, platforms must protect young users from content that promotes self-harm and suicide, which means safer algorithms and less toxic feeds."
Source: www.theguardian.com