A 60-year-old man from New York was hospitalized after following a strict salt-reduction regimen proposed by ChatGPT. Doctors say the man abruptly eliminated all sodium from his diet for weeks, causing his sodium levels to drop dangerously low, a condition known as hyponatremia. His family said he had relied on the AI-generated health plan without consulting a doctor. The incident, recently described in a journal of the American College of Physicians, highlights the risk of following AI health advice without professional supervision, especially when it concerns essential nutrients such as sodium. The man recovered after three weeks in the hospital.
ChatGPT advice leads to a dangerous substitute
According to the report, the man asked ChatGPT how to remove sodium chloride (commonly known as table salt) from his diet. The AI tool proposed sodium bromide as an alternative. Sodium bromide is a compound that was used in medicines in the early 20th century but is now recognized as toxic at high doses. Acting on this advice, the man bought sodium bromide online and used it in his cooking for three months.

Despite having no previous history of mental or physical illness, the man began to experience hallucinations, delusions, and extreme thirst. On admission to the hospital, he appeared confused and refused water for fear of contamination. Doctors diagnosed him with bromide toxicity. The condition is almost unheard of today, but it was common in the era when bromide was prescribed for anxiety, insomnia, and other ailments. He also exhibited neurologic symptoms, acne-like skin eruptions, and distinctive red spots known as cherry hemangiomas.

Hospital treatment focused on rehydration and restoring electrolyte balance. Over the course of three weeks the man's condition gradually improved, and he was discharged once his sodium and chloride levels had returned to normal.
The risk of AI misinformation
The case study authors highlighted the growing risk of health misinformation from AI tools. "It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," the report warned.

OpenAI, the developer of ChatGPT, explicitly addresses this in its terms of use: users "should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice." The terms also make clear that the service is not intended for the diagnosis or treatment of any medical condition.
Global conversations about AI responsibility
This case underscores the urgent need for critical thinking when interpreting AI-generated advice, particularly on health issues. Experts say AI tools are valuable for general information but should not replace professional consultation. As AI adoption grows, so does the responsibility to ensure that its output is accurate, safe, and clearly communicated.
