Instagram boss Adam Mosseri has reportedly blocked or watered down the implementation of youth safety features, even as parent company Meta faces increased legal scrutiny over concerns that the popular social media app is harming young users. Mosseri, whose name appears frequently in a high-profile lawsuit in which 33 states accuse Meta of harming young people’s mental health through addictive app features, reportedly ignored pressure from employees to make some safety features default settings for Instagram users, according to The Information.
Critics say that use of Meta-owned Instagram and Facebook is fueling a number of worrying trends among young people, including increases in depression, anxiety, insomnia, body image issues, and eating disorders.
Despite this, Instagram executives rejected pressure from members of the company’s well-being team to add app features that would nudge users away from comparing themselves to others, according to three former employees with knowledge of the matter. The features went unimplemented even though Mosseri himself acknowledged in an internal email that he considered social comparison “an existential problem” for Instagram, writing, according to the states’ complaint, that social comparison is to Instagram “[what] election interference is to Facebook.”
Additionally, a Mosseri-backed feature meant to address the social comparison problem by hiding Instagram like counts was eventually “watered down” into an option that users must manually enable, the report states.
Internally, some employees reportedly warned that the like-hiding tool would hurt engagement in the app, resulting in less advertising revenue.
While some sources praised Mosseri’s efforts to promote youth safety, one told the outlet that Instagram has a pattern of making such features optional rather than enabling them automatically.
A Meta spokesperson did not specifically answer questions about why the company rejected proposals for tools to combat social comparison problems.
“We don’t know what triggers a particular individual to compare themselves to others, so we give people tools to decide for themselves what they do and don’t want to see on Instagram,” a Meta spokesperson told the publication.
Meta did not immediately respond to The Post’s request for comment.
Elsewhere, Mosseri allegedly objected to a tool that would automatically block offensive language in direct message requests because he thought it might prevent legitimate messages from being sent, The Information reported, citing two former employees.
Instagram eventually approved an optional “filter” feature in 2021 that lets users block a company-curated list of offensive words or compile their own list of offensive phrases and emojis they’d like to block.
The move reportedly infuriated safety staffers, including former Meta engineer Arturo Bejar, who believed people of color should not be forced to confront offensive language in order to address the problem. In November, Bejar testified before a Senate committee about harmful content on Instagram.
“I returned to Instagram with the hope that Adam would be proactive about addressing these issues, but there was no evidence of that in the two years I was there,” Bejar told the outlet. He first left Meta in 2015 and returned in 2019 to a role on the safety team.
Meta pushed back against the report, pointing out that Instagram has rolled out a series of safety defaults for teen users, including blocking adults 19 and older from sending direct messages to teen accounts that don’t follow them.
For example, Meta said its “Hidden Words” tool, which hides offensive phrases and emojis, will be enabled by default for teens starting in 2024. The company said it has announced more than 20 teen safety policies since Mosseri took over Instagram in 2018.
Mosseri echoed this, writing that further investment in platform safety would “strengthen our business.”
“If teens come to Instagram and feel bullied, receive unwanted advances, or see content that makes them uncomfortable, they will leave and go to a competitor,” Mosseri said. “I know how important this work is, and I know that my leadership will be judged by how much progress we make on it. I look forward to continuing to do more.”
Mosseri was one of several Meta executives who came under scrutiny as part of a major lawsuit filed in October by a coalition of 33 state attorneys general. The suit claimed in part that Meta’s millions of underage Instagram users were an “open secret” at the company. The complaint includes an internal chat from November 2021 in which Mosseri appeared to acknowledge the app’s problem with underage users, writing that “tweens want access to Instagram, and they lie about their age to get it now.”
A month later, Mosseri testified before the Senate that children under 13 “are not allowed to use Instagram.” He also told lawmakers that he believes online safety for young people is “very important.”
Separate from the states’ legal challenge, Meta is confronting a lawsuit from New Mexico alleging that the company failed to protect young people from sexual predators and flooded them with adult sexual material.
Source: nypost.com