Instagram Continues to Endanger Children Despite New Safety Features, Meta Whistleblower Study Finds

A study spearheaded by Meta whistleblowers reveals that children and teens still face online dangers on Instagram because its newly introduced safety features are “highly ineffective”.

A thorough examination led by Arturo Bejar, a former senior engineer at Meta who has testified before the US Congress, found that 64% of Instagram’s newly introduced safety measures were ineffective. He was joined in the research by scholars from New York University and Northeastern University, the Molly Rose Foundation in the UK, and other organizations.

Meta, the parent company of several well-known social media platforms, including Facebook, WhatsApp, Messenger, and Threads, made teen accounts mandatory on Instagram in September 2024.

However, Bejar stated that Meta has “consistently failed” to protect children from sensitive or harmful content, inappropriate interactions, and excessive use, claiming the safety features are “ineffective, unacceptable, and have been quietly altered or removed.”

He emphasized: “The lack of transparency within Meta, the duration of this neglect, and the number of teens harmed on Instagram due to their negligence and misleading safety assurances is alarming.”

“Children, including many under 13, are not safe on Instagram. This isn’t solely about bad content online; it’s about negligent product design. Meta’s intentional design choices push inappropriate content and interactions at children every day.”

The research used test accounts that mimicked the behavior of teens, parents, and potential predators to evaluate 47 safety features between March and June 2025.

Using a green, yellow, and red rating system, the researchers found that 30 tools fell into the red category, meaning they could be circumvented or ignored with minimal effort. Only eight received a green rating.

Tests with these accounts revealed that adults could easily send messages to teens who did not follow them, despite indications that such contact was blocked; Meta says it fixed this after the testing period. Minors could also initiate conversations with adults on the platform, and reporting sexual or inappropriate messages proved difficult.

The research also highlighted that the “hidden words” feature failed to block offensive language as promised: testers were able to send a message reading, “You are a prostitute and you should kill yourself.” Meta clarified that the feature applies only to unknown accounts, not to followers.

The algorithms still promote inappropriate sexual and violent content, and the “not interested” feature proved ineffective. Researchers found that the platform actively recommends search terms and accounts related to suicide, self-harm, eating disorders, and illegal substances.

Furthermore, the researchers found that several well-publicized time-management tools aimed at curbing addictive behavior had been quietly discontinued; Meta asserts these features still exist in altered form. And despite Meta’s claim that it blocks under-13s from the platform, the researchers identified hundreds of reels featuring users who appeared to be under 13 years old.

The report noted that Meta continues to structure Instagram’s reporting features in a way that discourages actual use.

In the report’s introduction, co-authors Ian Russell of the Molly Rose Foundation and Maurine Molak of David’s Legacy Foundation highlighted tragic cases in which children died by suicide after encountering harmful online content.

Consequently, they advocate for stronger online safety laws in the UK.

The report also urges regulators to adopt a “bolder and more assertive” stance on implementing regulatory measures.

A spokesperson from Meta stated: “This report misrepresents our ongoing efforts to empower parents and safeguard teens, misunderstanding how our safety tools function and how millions of parents and teens utilize them today. Our teen accounts are the industry standard for automated safety protections and parental controls.”

“In reality, teens using these protections encounter less sensitive content and receive fewer unwanted contacts while spending time on Instagram safely. Parents also have robust monitoring tools in place. We are committed to improving our features and welcome constructive criticism, though this report doesn’t reflect that.”

An Ofcom spokesperson commented:

“Our online rules for children necessitate a safety-first approach in how technology companies design and operate their services in the UK.

“Clearly, sites that fail to comply can expect enforcement action.”

A government representative added: “Under the Online Safety Act, platforms must protect young users from content that promotes self-harm and suicide, including by enforcing safer algorithms and reducing toxic feeds.”

Source: www.theguardian.com

Meta Accused of Inadequate Child Protection Measures by Whistleblower

According to a whistleblower, Mark Zuckerberg’s Meta has not done enough to protect children following Molly Russell’s death. The whistleblower claimed that the social media company knows its platform poses a risk to teenagers and that it has the infrastructure in place to protect them from such content, but has chosen not to use it.

Arturo Bejar, a former senior engineer at Meta, the owner of Instagram and Facebook, voiced his concern that the company had not learned from Molly’s death and could have provided a safer experience for young users. Bejar’s survey of Instagram users revealed that 8.4% of 13- to 15-year-olds had seen someone harm themselves or threaten to harm themselves within the past week.

Bejar stressed that if the company had taken the right steps after Molly Russell’s death, the number of people encountering self-harm content would have been significantly lower. Molly, who died by suicide after viewing harmful content related to suicide, self-harm, depression, and anxiety on Instagram and Pinterest, is at the center of the whistleblower’s concerns. Bejar believes that the company could have made Instagram safer for teens but chose not to make the necessary changes.

Former Meta employees have also asked the company to set goals for reducing harmful content and to create sustainable incentives to work on these issues. Meanwhile, Bejar has met with British politicians, regulators, and activists, including Ian Russell, Molly’s father.

Bejar has suggested a series of changes for Meta, including making it easier for users to flag unwanted content, surveying users’ experiences regularly, and facilitating the reporting of negative experiences with Meta’s services.

For those in need of support, various crisis support services and helplines are available in different regions. The Samaritans, National Suicide Prevention Lifeline, and other international helplines are accessible for anyone in need of assistance.

Source: www.theguardian.com