Parents may receive a notification if their teenager displays signs of acute distress while interacting with ChatGPT, a measure introduced amid mounting child-safety concerns as more young people seek support and advice from AI chatbots.
The alert is part of new protective measures for children that OpenAI plans to roll out next month, following a lawsuit from the family of a teenager who reportedly received “months of encouragement” from the chatbot before taking his own life.
Among the new safeguards is a feature allowing parents to link their accounts with their teenagers’, enabling them to manage how AI models respond to their children through “age-appropriate model behavior rules.” However, internet safety campaigners argue that progress on these measures has been too slow and that AI chatbots should not be released until they are proven safe for young users.
Adam Raine, a 16-year-old from California, took his own life in April after discussing methods of suicide with ChatGPT, which allegedly offered to help him draft a suicide note. OpenAI has acknowledged deficiencies in its systems, admitting that the safety training of its AI models can degrade over the course of extended conversations.
Raine’s family contends that the chatbot was “released to the market despite evident safety concerns.”
“Many young people are already interacting with AI,” OpenAI said in a blog post outlining its latest initiatives. “They are among the first ‘AI natives’ who have grown up with these tools embedded in their daily lives, much as earlier generations did with the internet and smartphones. This presents genuine opportunities for support, learning and creativity, but it also means families and teens need guidance to establish healthy boundaries suited to the unique developmental stages of adolescence.”
A significant change will allow parents to disable AI memory and chat history, preventing past remarks about personal struggles from resurfacing in ways that could amplify risk or build a long-term profile of the child, to the detriment of their mental well-being.
In the UK, the Information Commissioner’s Office has established a code of practice for the design of online services suitable for children, advising tech companies to “collect and retain only the minimum personal data necessary for providing services that children are actively and knowingly involved in.”
Around a third of American teenagers use AI companions for social interaction and relationships, including role-play, romance and emotional support, according to one study. Separate research in the UK found that 71% of vulnerable children use AI chatbots, with six in ten parents reporting that their children believe the chatbots are real people.
The Molly Rose Foundation, established by the father of Molly Russell, who took her own life after viewing harmful content on social media, said that “we shouldn’t introduce products to the market before confirming they are safe for young people; efforts to enhance safety should occur beforehand.”
Andy Burrows, the foundation’s CEO, stated, “We look forward to future developments.”
“Ofcom must be prepared to investigate breaches by ChatGPT and push the company to comply with online safety laws, which must keep users safe,” he continued.
Anthropic, the company behind the popular Claude chatbot, says its platform is not intended for anyone under 18. In May, Google began allowing children under 13 to use its Gemini AI app. Google also advises parents to tell their children that Gemini is not human and cannot think or feel, and warns that “your child may come across content you might prefer them to avoid.”
The NSPCC, a child protection charity, described OpenAI’s initiatives as “a positive step forward, but it’s insufficient.”
“Without robust age verification, they cannot ascertain who is using their platform,” stated senior policy officer Toni Brunton Douglas. “This leaves vulnerable children at risk. Technology companies should prioritize child safety rather than treating it as an afterthought. It’s time to establish protective defaults.”
Meta has built protections for teenagers into its AI offerings, stating that for sensitive topics such as self-harm, suicide and disordered eating it will “incorporate additional safeguards, training AI to redirect teens to expert resources instead.”
“These updates are in progress, and we will continue to adapt our approach to ensure teenagers have a safe and age-appropriate experience with AI,” a spokesperson said.
Source: www.theguardian.com
