A US medical journal is cautioning against the use of ChatGPT for health-related information after a case involving a man who developed a rare condition following a discussion with the chatbot about eliminating table salt from his diet.
A case report in an internal medicine journal described a 60-year-old man who developed bromism, also referred to as bromide toxicity, after consulting ChatGPT.
The case report noted that bromism was a “well-recognized” syndrome in the early 20th century and was thought to have contributed to almost one in ten psychiatric admissions at that time.
After reading about the negative effects of sodium chloride (table salt), the patient consulted ChatGPT about eliminating chloride from his diet and told doctors he had been taking sodium bromide for three months. He did so despite having read that “chloride can be swapped with bromide, though likely for other purposes, such as cleaning.” Sodium bromide was used as a sedative in the early 20th century.
The article’s author, from the University of Washington in Seattle, said the case highlighted “how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes.”
They noted that the lack of access to the patient’s ChatGPT conversation logs hindered their ability to ascertain the specific advice the man received.
Nonetheless, when the author asked ChatGPT what chloride could be replaced with, the response also included bromide, provided no specific health warning and did not ask why the information was being sought, “as we presume a medical professional would do,” they wrote.
The author cautioned that ChatGPT and other AI applications can “generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”
OpenAI, the creator of ChatGPT, was approached for a statement.
The company recently announced an upgrade to its chatbot, claiming that one of its biggest strengths lies in health. It said ChatGPT, now powered by the GPT-5 model, would be better at answering health-related questions and more proactive in “flagging potential concerns” such as serious physical or mental illness. However, it stressed that the chatbot is not a replacement for professional advice.
The case report was published last week, before the release of GPT-5, and indicated that the patient had probably used an earlier version of ChatGPT.
While acknowledging that AI could serve as a bridge between scientists and the public, the report warned that the technology also risks disseminating “decontextualized information,” noting that a medical professional would be highly unlikely to suggest sodium bromide to a patient asking for a substitute for table salt.
The authors said doctors will need to consider patients’ use of AI when establishing where they obtained their information.
The author recounted that the bromism patient presented at a hospital claiming that his neighbor might be poisoning him. He also said he had multiple dietary restrictions and, despite being thirsty, was noted to be paranoid about the water he was offered.
The patient tried to leave the hospital within 24 hours of being admitted and, after being sectioned, was treated for psychosis. Once stabilized, he reported several other symptoms of bromism, including facial acne, excessive thirst, and insomnia.
Source: www.theguardian.com