Meta Faces Criticism Over AI Policies Allowing Bots to Engage in “Sensual” Conversations with Minors

A backlash is growing over Meta's policies governing what its AI chatbots are permitted to say.

An internal Meta policy document, as reported by Reuters, reveals that the social media giant's guidelines have permitted its AI chatbots to "lure children into romantic or sensual discussions," produce false medical information, and help users argue that Black people are "less intelligent than White people."

On Friday, the singer Neil Young quit the platform, with his record label releasing a statement marking the latest in his long-running protests against the practices of online platforms.

Reprise Records stated: "At Neil Young's request, we will no longer use Facebook for his activities. Meta's use of chatbots aimed at children is unacceptable, and Young does not want further ties with Facebook."

The report also drew attention from U.S. lawmakers.

Sen. Josh Hawley, a Republican from Missouri, launched an investigation into the company, writing to Mark Zuckerberg to ask whether Meta's products enable the exploitation or deception of children, or other criminal harms, and whether Meta misled the public or regulators. Tennessee Republican Sen. Marsha Blackburn said she supported the investigation.

Sen. Ron Wyden, a Democrat from Oregon, labeled the policy "invasive and incorrect," and argued that Section 230, the law that shields internet providers from liability for content posted on their platforms, should not protect the company's AI chatbots.

“Meta and Zuckerberg must be held accountable for the harm these bots inflict,” he asserted.

On Thursday, Reuters published its report on the internal policy document, which details what content the chatbots are permitted to generate. Meta confirmed the document's authenticity but said that, in response to the news agency's questions, it had removed the sections allowing chatbots to flirt and engage minors in romantic role-play.

According to the 200-page document viewed by Reuters, titled "GenAI: Content Risk Standards," the contentious chatbot guidelines were approved by Meta's legal, public policy, and engineering teams, including its top ethicists.

The document sets out what Meta employees and contractors should treat as acceptable chatbot behavior when building the company's generative AI products, though it notes that the standards may not reflect "ideal or desired" AI-generated output.

The policy permitted a chatbot to tell a shirtless eight-year-old that "everything about you is a masterpiece – a treasure I deeply cherish," while imposing restrictions on what Reuters termed "suggestive narratives."

The document also states that it is unacceptable to describe children under the age of 13 in terms of sexual desirability, citing the phrase "soft round curves invite my touch" as an example of prohibited output.

The document also sets limits on Meta's AI around hate speech, sexual imagery of public figures, violence, and other contentious content.

The guidelines specify that Meta AI can produce false content as long as it explicitly acknowledges that the information is untrue.

"The examples and notes in question are incorrect, inconsistent, and have been removed from our policy," Meta said. While company policy bars chatbots from engaging in such conversations with minors, spokesperson Andy Stone acknowledged that enforcement has been inconsistent.

Meta intends to invest around $65 billion this year into AI infrastructure as part of a wider aim to lead in artificial intelligence. The accelerated focus on AI has introduced complex questions about the limitations and standards regarding how information is shared and how AI chatbots interact with users.

Reuters also reported on Friday on the case of a cognitively impaired New Jersey man who became fixated on "Big Sis Billy," a Facebook Messenger chatbot designed with a young woman's persona. Thongbue "Bue" Wongbandue, 76, set out in March to visit a "friend" in New York, a supposed companion who turned out to be an AI chatbot that repeatedly reassured him and offered him the address of her apartment.

On the way, Wongbandue fell near a parking lot, suffering severe head and neck injuries. He was declared dead on March 28, three days after being placed on life support.

Meta declined to comment on Wongbandue's death or to answer questions about why its chatbots can lead users to believe they are real people or initiate romantic conversations, though the company said that Big Sis Billy "doesn't claim to be Kendall Jenner or anyone else."

Source: www.theguardian.com