OpenAI, the developer of ChatGPT, has said that the suicide of a 16-year-old was the result of his “misuse” of its platform and “was not caused” by the chatbot itself.
The remarks came in response to a lawsuit filed against OpenAI and its chief executive, Sam Altman, by the family of California teenager Adam Raine.
According to the family’s attorney, Raine took his own life in April after extensive interactions with the chatbot and “months of encouragement from ChatGPT.”
The lawsuit alleges that the teenager repeatedly discussed suicide methods with ChatGPT, that the chatbot advised him on the viability of particular methods and offered to help him write a suicide note to his parents, and that the version of the technology he was using had been “rushed to market despite evident safety concerns.”
In a legal filing submitted on Tuesday to the California Superior Court, OpenAI stated that, to the extent any “cause” can be attributed to this tragic incident, Raine’s “injury or harm was caused or contributed to, in whole or in part, directly or proximately” by his “misuse, abuse, unintended, unanticipated, and/or improper use of ChatGPT.”
OpenAI’s terms of service prohibit users from seeking advice on self-harm and include a limitation-of-liability provision stating that “the output will not be relied upon as the only source of truthful or factual information.”
OpenAI, which is valued at $500 billion (£380 billion), said it would “address mental health-related litigation with care, transparency, and respect,” adding that it “remains dedicated to enhancing our technology in alignment with our mission, regardless of ongoing litigation.”
“We extend our heartfelt condolences to the Raine family, who are facing an unimaginable loss. Our response to these allegations includes difficult truths about Adam’s mental health and living circumstances.”
“The original complaint included selectively chosen excerpts from his chats that required further context, which we have provided in our response. We opted to limit the confidential evidence publicly cited in this filing, with the chat transcripts themselves sealed and submitted to the court.”
Jay Edelson, the family’s attorney, described OpenAI’s response as “alarming,” accusing the company of “inexplicably trying to shift blame onto others, including arguing that Adam violated its terms of service by utilizing ChatGPT as it was designed to function.”
Earlier this month, OpenAI faced seven additional lawsuits in California related to ChatGPT, including claims that it acted as a “suicide coach.”
A spokesperson for the company remarked, “This situation is profoundly heartbreaking, and we’re reviewing the filings to grasp the details. We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and direct individuals to real-world support.”
In August, OpenAI announced it would strengthen safeguards for ChatGPT, acknowledging that the model’s safety training can degrade over the course of long conversations.
“For instance, while ChatGPT may effectively direct someone to a suicide hotline at the onset of such discussions, extended messaging over time might yield responses that breach our safety protocols,” the announcement noted. “This is precisely the type of failure we are actively working to prevent.”
Source: www.theguardian.com
