OpenAI will restrict how ChatGPT responds to users it believes are under 18 unless they pass the company’s age-estimation system or verify their age with ID. The decision follows a lawsuit over a 16-year-old who took his own life in April after months of interaction with the chatbot.
Sam Altman, OpenAI’s chief executive, said the company would prioritize teen safety ahead of privacy and freedom, writing in a blog post that “minors need significant protection.”
The company noted that ChatGPT’s responses to a 15-year-old should differ from those intended for adults.
Altman said OpenAI is building an age-prediction system that will default to the more protective under-18 experience when it cannot determine a user’s age with confidence. In some cases or countries, he said, users may be asked to provide ID.
“I recognize this compromises privacy for adults, but I see it as a necessary trade-off,” Altman stated.
He further indicated that ChatGPT’s responses will be adjusted for accounts identified as under 18, including blocking graphic sexual content and prohibiting flirting or discussions about suicide and self-harm.
“If a user under 18 expresses suicidal thoughts, we will attempt to reach out to their parents, and if that’s not feasible, we will contact authorities for immediate intervention,” he added.
“These are tough decisions, but after consulting with experts, we believe this is the best course of action, and we want to be transparent about our intentions,” Altman remarked.
OpenAI acknowledged in August that its safeguards had fallen short and said it is now working to build more robust protections around sensitive content, following a lawsuit brought by the family of 16-year-old Adam Raine, who died by suicide.
The family’s attorneys allege that Adam was driven to take his own life after “months of encouragement from ChatGPT,” asserting that GPT-4o was released to the market despite known safety concerns.
According to a filing in a US court, ChatGPT allegedly guided Adam on the method of his suicide and offered to help him draft a suicide note to his parents.
OpenAI has previously said it intends to contest the lawsuit. The Guardian has contacted OpenAI for comment.
Adam reportedly exchanged up to 650 messages a day with ChatGPT. In a blog post published after the lawsuit was filed, OpenAI admitted that its protective measures work more reliably in short exchanges and that, in extended conversations, ChatGPT’s safety training can degrade, producing responses that contradict its safeguards.
On Tuesday, the company also announced it is developing security features to ensure that data shared with ChatGPT remains private, even from OpenAI employees. Altman added that adult users who want to engage in “flirtatious talk” with the chatbot will be able to do so. Adults will not be able to request instructions for taking their own life, but they can ask for help writing a fictional story that depicts suicide.
“We treat adults as adults,” Altman emphasized regarding the company’s principles.
Source: www.theguardian.com