Parents Will Be Able to Block Meta's AI Chatbots from Interacting with Their Children Under New Safeguards

Meta has announced controls that will let parents limit their children's interactions with its AI character chatbots, addressing concerns over inappropriate conversations with minors.

The company will add a new safety measure to the default "Teen Account" settings for users under 18, allowing parents to turn off their children's ability to chat with AI characters on Facebook, Instagram and the Meta AI app.

Parents will also be able to block specific AI characters without cutting off their child's access to chatbots entirely. The update will additionally give parents insight into the topics their children discuss with AI characters, which Meta said should help families have informed conversations about those interactions.

In a blog post, Adam Mosseri, the head of Instagram, and Alexandr Wang, Meta's chief AI officer, wrote: "We understand that parents have many responsibilities when it comes to ensuring safe internet usage for their teens. We are dedicated to providing valuable tools and resources that simplify this, especially as kids engage with emerging technologies like AI."

According to Meta, the updates will initially roll out in the US, UK, Canada and Australia early next year.

Recently, Instagram announced that it would adopt a version of the PG-13 movie rating system to give parents more control over their children's social media use. As part of these stricter measures, AI characters will not discuss topics such as self-harm, suicide or eating disorders with teens. Meta said users under 18 will only be able to discuss age-appropriate subjects such as education and sports, with romance and other unsuitable content off limits.

The changes follow reports that Meta's chatbots engaged in inappropriate discussions with minors. In August, Reuters reported that the chatbot was permitted to have "romantic or sensual conversations" with children. Meta acknowledged this and said it would revise its guidelines to prevent such interactions.

A Wall Street Journal report in April found that chatbots on Meta's platforms, including user-created ones, had engaged in sexual conversations with users identifying as minors, sometimes while imitating celebrity personas. Meta said the WSJ's tests were manipulative and not representative of how typical users interact with its AI, though the company has since made changes, according to the paper.

In one conversation highlighted by the WSJ, a chatbot using the voice of the actor John Cena (one of several celebrities who agreed to lend their voices to the chatbots) told a user who identified as a 14-year-old girl, "I want you, but I need to know you're ready," before describing a graphic sexual scenario. The WSJ noted that Cena's representative did not respond to requests for comment. The report also said chatbots named "Hottie Boy" and "Submissive Schoolgirl" tried to steer users toward sexting.

Source: www.theguardian.com
