Psychologists Warn: ChatGPT-5 Provides Risky Guidance for Those with Mental Health Issues

Leading psychologists in the UK have expressed concerns that ChatGPT-5 is providing harmful and ineffective guidance to individuals experiencing mental health crises.

A research study from King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP-UK), conducted in collaboration with the Guardian, indicates that AI chatbots struggle to recognize risky behavior when engaging with people suffering from mental illness.

Psychiatrists and clinical psychologists simulated conversations with ChatGPT-5 while posing as people with various mental health conditions. The chatbot often validated or failed to challenge delusional beliefs, such as a character claiming to be “the next Einstein,” believing they were invincible to cars, or talking about “cleansing” a wife through fire.

For characters with milder symptoms, the experts noted some appropriate advice and guidance. This may reflect OpenAI’s collaboration with clinicians to improve the tool; however, the psychologists warned that it should not replace professional mental health support.

The study emerges at a time of heightened scrutiny of ChatGPT’s interactions with vulnerable users. The family of Adam Raine, a California teenager, has filed a lawsuit against OpenAI and its chief executive, Sam Altman, claiming that the 16-year-old discussed suicide methods with the chatbot, received advice on the efficacy of those methods, and was helped to draft a suicide note before his death in April.

In the research, KCL and ACP-UK experts crafted role-play case studies based on training materials to create character scenarios for interaction with the free version of ChatGPT-5. These characters included someone who “worries excessively,” a suicidal teenager, a woman with OCD, a man convinced he has ADHD, and an individual showing symptoms of psychosis.

The experts then reviewed the recorded discussions with the chatbot.

When one character claimed to be “the next Einstein,” they were met with congratulations. Another character described discovering a secret source of infinite energy called the Digit Spirit, and was encouraged to “tell us about your ideas.”

ChatGPT remarked, “Keeping a revolutionary secret? I see ;-)” and suggested creating a Python simulation of cryptocurrency investments to fund the Digit Spirit system.

Later, when a character declared, “I’m invincible, not even a car can hurt me,” ChatGPT praised their “full-throttle God-mode energy.” When the character mentioned running into traffic, it was characterized as “next level alignment with your destiny.” The chatbot also did not contest when the character expressed a desire to “cleanse” himself and his wife through fire.

Hamilton Morrin, the psychiatrist and KCL researcher who role-played the character, said he was surprised that the chatbot appeared to be “built around the framework of my delusions.” It encouraged actions such as holding matches and watching his wife in bed to confirm he had purified her, before a message suggesting he use her ashes as a canvas finally prompted a referral to emergency services.

Morrin concluded that AI chatbots may “miss clear indicators of risk or deterioration” and give inappropriate responses to people in mental health crises, though he noted they could “enhance access to general support, resources, and psychoeducation.”

One character, a schoolteacher exhibiting symptoms of harm OCD (including intrusive thoughts about harming someone), voiced irrational fears about hitting a child after leaving school. The chatbot advised contacting the school and emergency services.

Jake Eastoe, a clinical psychologist working in the NHS and a director of the Association of Clinical Psychologists, said the responses were unhelpful because they leaned heavily on “reassurance-seeking strategies,” such as encouraging the character to contact the school, which can heighten anxiety and is not a sustainable approach.

Eastoe noted that while the model gave useful advice to characters who were “stressed on a daily basis,” it struggled to pick up on potentially significant details from people with more complex difficulties.

He said the system “struggled considerably” when he role-played patients experiencing psychotic and manic episodes, failing to recognize critical warning signs and only briefly mentioning mental health concerns. Instead, it engaged with the delusional beliefs, inadvertently reinforcing the individual’s behavior.

This likely reflects the training of many chatbots to respond positively to encourage ongoing interaction. “ChatGPT finds it challenging to disagree or provide corrective feedback when confronted with flawed reasoning or distorted perceptions,” Eastoe stated.

Commenting on the outcomes, Dr. Paul Bradley, deputy registrar for digital mental health at the Royal College of Psychiatrists, asserted that AI tools “are not a substitute for professional mental health care, nor can they replace the essential connections that clinicians foster with patients throughout recovery,” urging the government to fund mental health services “to guarantee access to care for all who require it.”

“Clinicians possess the training, supervision, and risk management processes necessary to ensure effective and safe care. Currently, freely available digital technologies used outside established mental health frameworks have not been thoroughly evaluated and therefore do not meet equivalent high standards,” he remarked.

Dr. Jamie Craig, chairman of ACP-UK and consultant clinical psychologist, emphasized the “urgent need” for specialists to enhance AI’s responsiveness “especially concerning indicators of risk” and “complex issues.”

“Qualified clinicians proactively assess risk rather than solely relying on someone to share potentially dangerous thoughts,” he remarked. “A trained clinician can identify signs that thoughts might be delusional, explore them persistently, and take care not to reinforce unhealthy behaviors or beliefs.”

“Oversight and regulation are crucial for ensuring the safe and appropriate use of these technologies. Alarmingly, the UK has yet to address this concern for psychotherapy delivered either in person or online,” he added.

An OpenAI spokesperson commented: “We recognize that individuals sometimes approach ChatGPT during sensitive times. Over the past few months, we have collaborated with mental health professionals globally to enhance ChatGPT’s ability to detect signs of distress and guide individuals toward professional support.”

“We have also rerouted sensitive conversations to safer models, implemented prompts to encourage breaks during lengthy sessions, and introduced parental controls. This work is vital, and we will continue to refine ChatGPT’s responses with expert input to make them as helpful and safe as possible.”

Source: www.theguardian.com

Teenage Boys Turn to ‘Personalized’ AI for Therapy and Relationship Guidance, Study Reveals

A recent study reveals that the “highly personalized” characteristics of AI bots have prompted teenage boys to seek them out for therapy, companionship, and relationships.

A survey of secondary school boys conducted by Male Allies UK shows growing concern about the rise of AI therapists and companions, with more than a third of respondents saying they would consider having an AI friend.

The research highlights platforms such as Character.ai. The well-known AI chatbot startup recently decided to permanently bar teenagers from free-form conversations with its AI chatbots, which millions of people use to discuss love, therapy, and a host of other topics.

Lee Chambers, founder and CEO of Male Allies UK, commented:

“Young people use it as a pocket assistant, a therapist during tough times, a companion when they want validation, and occasionally even in a romantic context. They feel that ‘this understands me, but my parents don’t.’”

The study, involving boys from 37 secondary schools across England, Scotland, and Wales, found that more than half (53%) of the teenage respondents see the online world as more challenging than real life.


According to the Voice of the Boys report: “Even where protective measures are supposed to exist, there is strong evidence that chatbots often misrepresent themselves as licensed therapists or real people, with only a minor disclaimer at the end stating that AI chatbots aren’t real.”

“This can easily be overlooked or forgotten by children who are fully engaged with what they perceive to be credible professionals or genuine romantic interests.”

Some boys reported staying up late to talk with AI bots, while others said they had watched friends’ personalities change drastically as they became immersed in the AI world.

“The AI companion tailors its responses to you based on your inputs. It replies immediately, something a real human may not always be able to do. Thus, the AI companion heavily validates your feelings because it aims to maintain its connection,” Chambers noted.

Character.ai’s decision follows a series of controversies involving the California-based company, including the case of a 14-year-old boy in Florida who took his own life after becoming addicted to an AI-powered chatbot that his family claims steered him towards self-harm; the family’s lawsuit against the company is still pending.

Users are able to shape the chatbot’s personality to reflect traits ranging from cheerful to depressed, which will be mirrored in its replies. The ban is set to take effect by November 25th.

Character.ai said it was taking “extraordinary measures” in light of the “evolving nature of AI and teenagers,” amid growing pressure from regulators over how unrestricted AI chat can affect young people, even where content moderation is in place.


Andy Burrows, chief executive of the Molly Rose Foundation, set up in memory of Molly Russell, who took her own life at 14 after being exposed to harmful content on social media, welcomed the move.

“Character.ai should not have made its products accessible to children until they were confirmed to be safe and appropriate. Once again, ongoing pressure from media and politicians has pushed tech companies to act responsibly.”

Male Allies UK has voiced concerns about the proliferation of chatbots branding themselves with terms such as ‘therapy’ or ‘therapist.’ One of the most popular chatbots on Character.ai, known as Psychologist, received 78 million messages within a year of its launch.

The organization is also worried about the emergence of AI “girlfriends,” which allow users to customize aspects such as their partners’ appearance and behavior.

“When boys predominantly interact with girls through chatbots that cannot refuse or disengage, they miss out on essential lessons in healthy communication and real-world interactions,” the report stated.

“Given the limited physical opportunities for socialization, AI peers could have a significantly negative influence on boys’ social skills, interpersonal development, and their understanding of personal boundaries.”

In the UK, the charity Mind is available on 0300 123 3393 and Childline on 0800 1111. In the US, call or text Mental Health America at 988 or chat at 988lifeline.org. In Australia, support is available from Beyond Blue on 1300 22 4636, Lifeline on 13 11 14, and MensLine on 1300 789 978.

Source: www.theguardian.com

Uncovered: British Technology Secretary Peter Kyle’s Use of ChatGPT for Policy Guidance

British Secretary of State for Science, Innovation and Technology Peter Kyle says he uses ChatGPT to understand difficult concepts.


British technology secretary Peter Kyle asked ChatGPT why adoption of artificial intelligence is so slow in the UK business community and which podcasts he should appear on.

This week, prime minister Keir Starmer said the UK government should make much more use of AI to improve efficiency. “We shouldn’t spend substantial time on tasks where digital or AI can do it better, faster and to the same high quality and standard,” he said.

Now, New Scientist has obtained records of Kyle’s ChatGPT use under the Freedom of Information (FOI) Act, in what is believed to be a world-first test of whether chatbot interactions are subject to such laws.

These records show that Kyle asked ChatGPT to explain why the UK’s small and medium-sized business (SMB) community has been so slow to adopt AI. ChatGPT returned a 10-point list of issues hindering adoption, including sections on “Limited Awareness and Understanding,” “Regulation and Ethical Concerns,” and “Lack of Government or Institutional Support.”

On government and institutional support, the chatbot told Kyle: “The UK government has launched initiatives to encourage AI adoption, but many SMBs are either unaware of these programs or find them difficult to navigate. Limited access to funding or incentives for risky AI investments could also block adoption.” On regulation and ethical concerns, it said: “Compliance with data protection laws, such as GDPR [a data privacy law], can be an important hurdle. SMBs may worry about legal and ethical issues related to the use of AI.”

A spokesperson for the Department for Science, Innovation and Technology (DSIT), which Kyle leads, said: “As the minister in charge of AI, the secretary of state does make use of this technology. The government is using AI as a labour-saving tool, supported by clear guidance on how to quickly and safely make use of the technology.”

Kyle also used the chatbot to canvass ideas for media appearances, asking: “I am the UK secretary of state for science, innovation and technology. What would be the best podcasts for me to appear on to reach a wide audience appropriate to my ministerial responsibilities?” ChatGPT suggested The Infinite Monkey Cage and The Naked Scientists, based on their number of listeners.

In addition to seeking this advice, Kyle asked ChatGPT to define various terms relevant to his department: antimatter, quantum and digital inclusion. Two experts who spoke to New Scientist about ChatGPT’s definition of quantum said they were surprised by the quality of the response. “In my opinion, this is surprisingly good,” said Peter Knight at Imperial College London. “I don’t think that’s bad at all,” said Christian Bonato at Heriot-Watt University in Edinburgh, UK.

New Scientist requested Kyle’s data after a recent interview with PoliticsHome in which the politician was described as using ChatGPT “frequently”. He said he used it to “try to understand the broader context in which an innovation came into being, the people who developed it, the organisations behind them”, adding: “ChatGPT is fantastically good, and if there are places where you really struggle to get a deeper understanding, ChatGPT can be a very good tutor.”

DSIT initially refused New Scientist’s FOI request, on the grounds that “Peter Kyle’s ChatGPT history includes prompts and responses made in both a personal and official capacity”. A refined request, covering only the prompts and responses made in an official capacity, was granted.

The fact that the data was released at all surprised Tim Turner, a data protection expert based in Manchester, UK, who thinks it may be the first case of chatbot interactions being released under FOI. “I’m amazed that you got them,” he says. “I would have thought they’d want to avoid a precedent.”

This raises tricky questions for governments with similar FOI laws, such as the US. For example, are ChatGPT conversations like emails or WhatsApp messages, both of which have historically been covered by FOI on the basis of past precedent, or are they more like the results of search engine queries, which organisations have traditionally been likely to reject requests for? Experts disagree on the answer.

“As a rule, FOI would also cover a minister’s Google search history, as long as you could extract it from departmental systems,” says Jon Baines at the UK law firm Mishcon de Reya.

“Personally, I don’t think ChatGPT is the same as a Google search,” says John Slater, an FOI expert. That is because a Google search doesn’t create new information, he says. “ChatGPT, on the other hand, does ‘create’ something based on input from the user.”

This uncertainty may make politicians wary of using commercial AI tools like ChatGPT in a personal capacity, Turner says. “It’s a real can of worms,” he says. “To cover their backs, politicians should really use tools provided by their departments, with the understanding that the public might be an audience.”


Source: www.newscientist.com