Instagram’s Ongoing Commitment to Youth Safety: Will the New “PG-13” Guidelines Make a Difference?

For months, Instagram has struggled to persuade parents, advocates, and officials that it is a safe environment for children, even as evidence mounts to the contrary. Now, the platform is rolling out yet another safety feature intended to protect teens. But given its track record, parents remain skeptical.

Beginning this week, all users under 18 will automatically be placed in a 13+ setting, and their feeds will be restricted to content in line with the U.S. PG-13 movie rating.

However, Instagram’s previous unfulfilled commitments make this latest content restriction feel like mere window dressing—an illusion of action without genuine effectiveness.

The company has accrued substantial profits, reporting roughly $100 billion annually, while advocacy groups have long cautioned against exposing minors to inappropriate content and individuals. Meta’s own estimates suggest that about 100,000 children using Facebook and Instagram face online sexual harassment daily. This is all the more concerning given that internal communications from July 2020 described the company’s measures to prevent child grooming on the platform as, at best, “between zero and negligible.” A lawsuit in New Mexico claims that Meta’s social networks, including Instagram, have essentially become a haven for child predators. (Meta refutes the core allegations and calls the lawsuit “unfair.”)

Last year, the company finally made teen accounts mandatory on Instagram. However, a recent study led by a whistleblower found that 64% of the new safety features designed for teens were ineffective.

Research indicates that 47% of young teen users on Instagram encounter unsafe content, and that 37% of users aged 13 to 15 receive at least one piece of unsafe content or unwanted message each week. This includes “approximately 1 in 7 users viewing self-harm content, unwanted sexual content, discriminatory content, or substance-related content every week.”

“These failures showcase a corporate culture at Meta that prioritizes engagement and profit over safety,” said Andy Burrows, CEO of the UK’s Molly Rose Foundation, which advocates for stronger online safety legislation and was part of the research team, the BBC reported. A Meta spokesperson countered that the study “misrepresents our commitment to empowering parents and protecting youth, and mischaracterizes the functionality of our safety tools and their use by millions of parents and youth.”

The measures introduced last year followed a pivotal moment for Meta’s public image. In January 2024, the heads of the world’s major social media companies were summoned before the U.S. Senate to answer for their child safety policies. Meta CEO Mark Zuckerberg apologized to parents whose children had allegedly been harmed by social media.

Despite Instagram’s long-running efforts to address these concerns, it appears to repeatedly place children at risk, only to apologize afterward. On Monday, Reuters reported that internal company research found that teens who frequently felt negative about their bodies encountered three times more “eating disorder-related content” on Instagram than their peers. Alarmingly, technology companies and social media platforms have become so entrenched in everyday life that it is nearly impossible to participate in society without them, particularly for children.

So, what is the solution? First, we must treat online spaces as extensions of the real world rather than mere digital counterparts. Social media platforms replicate real-life violence and can cause other tangible harms, putting children at heightened risk.

It’s essential for lawmakers to require these companies to incorporate safety measures into their design processes rather than treating them as an afterthought. Equally vital is for parents to educate their children on online safety, just as they would about physical safety in public.

The technology developed by these profit-driven companies is pervasive. If we cannot rely on them to safeguard our most vulnerable users, it falls upon us to ensure our own protection.

Source: www.theguardian.com

Meta Announces PG-13 Style System for Instagram to Safeguard Children

Instagram is set to implement a PG-13-style rating system to give parents more control over what their teens encounter on the platform.

Owned by Meta, Instagram will apply guidelines akin to the U.S. PG-13 (“Parents Strongly Cautioned”) movie rating, established 41 years ago, to all content viewed by teen accounts. Consequently, users under 18 will automatically be placed in the 13+ setting and can opt out only with parental consent.

Currently, teen accounts restrict or prohibit sexually suggestive material, graphic images, and promotions for age-restricted products like alcohol and tobacco. The forthcoming PG-13 framework will impose even stricter rules.

Meta indicated that it will limit the visibility of posts promoting “harmful” activities or featuring strong language, risky stunts, or marijuana paraphernalia. Additionally, search terms like “alcohol” and “gore” will be blocked, even when misspelled.
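Meta has not disclosed how this misspelling-tolerant blocking works. As a rough illustration only, a blocklist that catches near-miss spellings can be sketched with a simple string-similarity check; the term list, threshold, and matching method below are assumptions for the sketch, not Meta’s actual implementation.

```python
# Hypothetical sketch of a misspelling-tolerant search-term blocklist.
# Instagram's real system is not public; this only illustrates the general
# idea of catching near-miss spellings of blocked terms.
from difflib import SequenceMatcher

BLOCKED_TERMS = {"alcohol", "gore"}  # example terms named in the article
SIMILARITY_THRESHOLD = 0.8           # assumed cutoff, chosen for illustration

def is_blocked(query: str) -> bool:
    """Return True if any word in the query closely resembles a blocked term."""
    for word in query.lower().split():
        for term in BLOCKED_TERMS:
            # ratio() returns a 0..1 similarity score, so the misspelling
            # "alchohol" still matches the blocked term "alcohol".
            if SequenceMatcher(None, word, term).ratio() >= SIMILARITY_THRESHOLD:
                return True
    return False

print(is_blocked("alchohol near me"))  # True: close misspelling is caught
print(is_blocked("summer recipes"))    # False: nothing resembles a blocked term
```

A production system would likely use more robust techniques (normalization of leetspeak and diacritics, phonetic matching, learned classifiers), but the principle of fuzzy matching against a blocklist is the same.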

Meta commented, “While there are distinctions between movies and social media, our modifications aim to provide a teen experience within a 13+ context that parallels viewing a PG-13 film,” emphasizing its desire to explain the policy within a framework familiar to parents.

The closest equivalent to PG-13 in British film ratings is 12A. Notably, Instagram’s new rating doesn’t impose a complete ban on nudity, similar to how PG-13/12A films like Titanic include brief nudity that isn’t explicitly sexual. Moderate violence, akin to what is found in Fast & Furious films, will also remain accessible.

This initiative follows a study by a former Meta whistleblower which found that 64% of Instagram’s new safety features are ineffective. The assessment was conducted by Arturo Bejar, a former senior engineer at Meta, alongside academics from New York University, Northeastern University, and the UK’s Molly Rose Foundation. Bejar stated, “Children are not safe on Instagram.” Meta dismissed the findings, asserting that parents have robust tools at their disposal.

Ofcom, the UK communications regulator, urged social media platforms to adopt a “safety-first” strategy and warned that non-compliance could lead to enforcement actions.

Meta announced that the Instagram update will begin in the U.S., UK, Australia, and Canada, with plans to expand to Europe and the rest of the world early next year.

Activists, however, have questioned whether these changes will meaningfully improve safety.

Rowan Ferguson, policy manager at the Molly Rose Foundation, remarked: “Despite Meta’s numerous public statements, we have yet to see substantial safety improvements for teens, and our recent report shows there is still work to be done to shield them from harmful content.”

“These additional updates will need to be evaluated for their effectiveness, which requires transparency from Meta so that safety features can be independently tested.”

Source: www.theguardian.com