Published on August 13, 2025 7:15 PM GMT
Case report here, with excerpts and commentary below:https://www.acpjournals.org/doi/10.7326/aimcc.2024.1260?ref=404media.co
A 60-year-old man with no past psychiatric or medical history presented to the emergency department expressing concern that his neighbor was poisoning him.
In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability. He received risperidone, which was titrated up to 3 mg daily for psychosis.
For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning.
The Wikipedia page for bromism (i.e. bromide poisoning) lists psychosis as a possible symptom, so ChatGPT would almost certainly have known this (someone with access to the old models could easily check).
Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
If it was ChatGPT 3.5, it's probably just a weird coincidence, though still unnerving. There has not been a model called "ChatGPT 4.0" (to the best of my knowledge), but there is of course ChatGPT 4o, which is by far the model most implicated in LLM-induced psychosis. That could still be a crazy coincidence of course, but it is extremely concerning that it plausibly isn't!
Discuss
