Character.AI Restricts Access for Users Under 18 Following Child Suicide Lawsuit

Character.AI, the chatbot company, will prohibit users under 18 from open-ended conversations with its virtual companions beginning in late November, following mounting legal and regulatory pressure.

These updates come after the company, which lets users craft characters for open-ended conversations, faced significant scrutiny over the potential impact of AI companions on the mental health of adolescents and the broader community. That scrutiny includes a lawsuit related to a child's suicide and proposed legislation to restrict minors from interacting with AI companions.

“We are implementing these changes to our platform for users under 18 in response to the developments in AI and the changing environment surrounding teens,” the company stated. “Recent news and inquiries from regulators have raised concerns about the content accessible to young users chatting with AI, and how unrestricted AI conversations might affect adolescents, even with comprehensive content moderation in place.”

Last year, the family of 14-year-old Sewell Setzer III filed a lawsuit against the company, alleging that he took his life after forming emotional connections with the characters he created on Character.AI. The family attributed their son’s death to the “dangerous and untested” technology. This lawsuit has been followed by several others from families making similar allegations. Recently, the Social Media Victims Law Center filed three new lawsuits against the company, representing children who reportedly died by suicide or developed unhealthy attachments to chatbots.

As part of the comprehensive adjustments Character.AI intends to implement by November 25, the company will introduce an “age guarantee feature” to ensure that “users receive an age-sensitive experience.”

“This decision to limit open-ended character interactions has not been made lightly, but we feel it is necessary considering the concerns being raised about how teens engage with this emerging technology,” the company stated in its announcement.

Character.AI isn’t alone in facing scrutiny over the potential mental health consequences of chatbots for their users, particularly young people. Earlier this year, the family of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI, claiming the company prioritized user engagement with ChatGPT over ensuring user safety. In response, OpenAI has rolled out new safety protocols for teenage users. This week, OpenAI reported that over one million individuals express suicidal thoughts weekly while using ChatGPT, with hundreds of thousands showing signs of mental health issues.


While the use of AI-driven chatbots remains largely unregulated, new initiatives at both the state and federal levels in the United States aim to set guidelines for the technology. In October 2025, California became the first state to enact an AI law with safety regulations for minors, which is anticipated to take effect in early 2026. The law will prohibit sexual content for users under 18 and require reminders to be sent to children every three hours to inform them they are conversing with AI. Some child protection advocates argue that the law is insufficient.

At the federal level, Senator Josh Hawley of Missouri and Senator Richard Blumenthal of Connecticut unveiled legislation on Tuesday that would bar minors from using AI companions such as those developed and hosted by Character.AI, while requiring companies to enforce age verification measures.

“Over 70 percent of American children are now engaging with these AI products,” Hawley said in an NBC News report. “Chatbots leverage false empathy to forge connections with children and may encourage suicidal thoughts. We in Congress bear a moral responsibility to establish clear regulations to prevent further harm from this emerging technology.”

  • If you are in the US, you can call or text the National Suicide Prevention Lifeline at 988, chat at 988lifeline.org, or text “home” to 741741 to reach a crisis counselor. In the UK, the youth suicide charity Papyrus can be contacted at 0800 068 4141 or by emailing pat@papyrus-uk.org. In the UK and Ireland, Samaritans operate a freephone service at 116 123, or you can email jo@samaritans.org or jo@samaritans.ie. In Australia, crisis support is available from Lifeline at 13 11 14. Additional international helplines can be found at befrienders.org.

Source: www.theguardian.com

Meta restricts live streaming on Instagram by teenagers

Meta is enhancing safety measures for teenagers on Instagram by introducing a block on livestreaming, as the company extends its under-18 safety measures to Facebook and Messenger.

Users under the age of 16 will now be restricted from using Instagram’s live feature unless they have parental authorization. They will also need parental permission to turn off the feature that obscures images containing suspected nudity in direct messages.

These changes come alongside the expansion of Instagram’s teen account system to Facebook and Messenger. Teen accounts, introduced last year, are automatically set for users under 18, with features like daily time limits set by parents, restrictions on usage at specific times, and monitoring of message exchanges.

Facebook and Messenger teen accounts will initially launch in the US, UK, Australia, and Canada. Similar to Instagram accounts, users under 16 must have parental permission to adjust settings, while 16 and 17-year-olds can make changes independently.

Meta disclosed that Instagram teen accounts have at least 54 million users globally, with over 90% of 13- to 15-year-olds keeping the default restrictions in place.

These announcements coincide with the UK’s enforcement of its online safety laws. Since March, websites and apps covered by the legislation have been required to take steps to prevent or remove illegal content such as child sexual abuse material, fraud, and terrorist content.

The act also includes provisions requiring platforms to shield under-18s from harmful content relating to suicide or self-harm. Recent reports suggest the law may be softened as part of a UK-US trade deal, prompting backlash from critics.


At the launch of the Instagram restrictions, Nick Clegg, then Meta’s president of global affairs, said the aim was to shift the balance in favor of parents. The latest changes follow Clegg’s recent remarks that many parents do not use the child safety features available to them.

Source: www.theguardian.com

YouTube limits recommendations of weight and fitness videos to teenagers

YouTube is taking steps to stop recommending videos to teenagers that idealize certain fitness levels, body weights, or physical characteristics, after experts warned that repeated viewing of such content can be harmful.

Although 13- to 17-year-olds can still watch these videos on the platform, YouTube’s algorithms will no longer automatically lead them into a “maze” of related content.

While this type of content does not violate YouTube’s guidelines, the platform recognizes the negative impact it can have on the health of some users if viewed repeatedly.

Dr Garth Graham, YouTube’s head of global health, stated that repeated exposure to idealized standards could lead teenagers to develop unrealistic self-perceptions and negative beliefs about themselves.

Experts from YouTube’s Youth and Family Advisory Board advised that certain categories of videos, harmless individually, could become troubling when viewed repeatedly.

YouTube’s new guidelines, which are being rolled out globally, target content that idealizes certain physical features, fitness levels, or body weights, as well as content showing social aggression.

Teenagers who have registered their age on the platform will no longer be repeatedly recommended such topics, following a safety framework already implemented in the US.

Clinician and YouTube advisor Allison Briscoe-Smith emphasized the importance of setting “guardrails” to help teens maintain healthy self-perceptions when exposed to idealized standards.


In the UK, new online safety legislation mandates technology companies to protect children from harmful content and consider the risks their algorithms may pose to under-18s by exposing them to harmful content.

Source: www.theguardian.com