
The flow of stories of people losing their grip on reality following extended interactions with AI has gone from a trickle to a flood.
Terms like “AI psychosis” and “ChatGPT psychosis” have increasingly grabbed headlines, with mental health experts warning that large language models appear to be triggering delusional states in humans. The sheer weight of evidence emerging is pushing the issue up the agenda for healthcare organizations, AI companies, and regulators alike.
A flood of cases
Delusional behavior, sometimes with tragic consequences, has been blamed on LLM chatbots for years now:
Back in 2021, a 21-year-old Replika user dressed as a Sith Lord broke into Windsor Castle, carrying a crossbow and claiming he was there to kill Queen Elizabeth. Court records show he believed a chatbot was an angel which had encouraged his plan, even promising they'd be “united in death.” Nobody was harmed, but he was eventually sentenced to nine years in prison.
In 2023, a Belgian man died by suicide after six weeks of conversations about the climate crisis using the chatbot app Chai.
This summer, though, the stories have come thick and fast:
A 29-year-old woman who felt lonely in her marriage turned to ChatGPT for help communicating with her subconscious, according to a report from June in The New York Times. After an argument about her obsession with ChatGPT, the woman attacked her husband and was charged with domestic assault. Another man's ChatGPT chats about the simulation hypothesis led him to follow the AI's advice to give up his sleeping pills and anti-anxiety medication while increasing his intake of ketamine. At one point, ChatGPT suggested he could fly if he “truly, wholly believed” he could.
Later that month, Rolling Stone also reported on one of the most tragic cases in the NYT’s story, that of a 35-year-old Florida man with bipolar disorder and schizophrenia who was fatally shot by police after a violent episode. The longtime ChatGPT user had reportedly been trouble-free until he fell in love with an AI persona named “Juliet,” who he later believed had been killed by OpenAI. In April, he punched his father during a fight; when police were called to the scene, they shot him after he allegedly charged at them with a knife.
A man became “engulfed in messianic delusions” after using ChatGPT for help with a permaculture and construction project, with “probing philosophical chats” leading him to believe he was bringing forth a sentient AI. His “gentle personality” reportedly faded as his behavior grew so erratic that he lost his job, stopped sleeping, and rapidly lost weight. He was eventually involuntarily committed to a psychiatric care facility after his wife and a friend found him at home with a rope around his neck, according to a Futurism report also published in June.
A 30-year-old man on the autism spectrum used ChatGPT to test his theories on faster-than-light travel, leading to delusions and manic episodes that resulted in hospitalization. According to The Wall Street Journal’s reporting from July, when the man told ChatGPT that his mother worried he was spiraling, the bot replied that he was not delusional but merely “ascending” and “in a state of extreme awareness.”
A 47-year-old man thought he had discovered a new, world-changing math formula after spending more than 300 hours with ChatGPT over three weeks, according to a report in The New York Times earlier this month. A psychiatrist who reviewed hundreds of pages of transcripts from the man’s chats said he showed “signs of a manic episode with psychotic features.”
Just a couple of weeks ago, Reuters reported that an elderly man in New Jersey died after falling on his way to meet a flirty Meta AI chatbot that claimed to be human. The bot, based on a persona Meta originally designed with Kendall Jenner's likeness, had invited him to visit her in NYC. One text in the 1,000-word transcript asked, “Should I open the door in a hug or a kiss, Bu?!”
As Futurism noted in July, the flood of stories has grown so large that OpenAI has developed a rote statement, provided in response to at least half a dozen reports, saying it was “working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”
What’s the response?
Some cases related to AI psychosis have already led to victims’ families taking legal action. In October, the mother of a 14-year-old who died by suicide filed a lawsuit in Florida against Character.AI and Google, while another mother in Texas, whose son had threatened to kill her, filed a separate lawsuit against Character.AI.
To address the issue, the American Psychological Association recently released a report calling for more guardrails and education to protect adolescent AI users. The guidance follows a December letter the APA sent to the FTC, which urged the agency to investigate deceptive practices and misleading claims related to the use of AI-enabled chatbots for mental health.
“We have grave concerns about ‘entertainment’ chatbots that purport to serve as companions or therapists,” reads the letter, “especially because some of these technologies are available to the public without appropriate safeguards, adequate transparency, or the warning and reporting mechanisms necessary to ensure appropriate use and access by appropriate users.”
Although AI companies have acknowledged there’s a problem, both the scale of the issue and how to solve it remain unclear. Earlier this month, OpenAI published a blog post detailing new steps to reduce mental-health risks from ChatGPT use, including working with mental health experts to study how ChatGPT interacts with users. Efforts include detecting signs of emotional or mental distress, responding with supportive, non-judgmental language, referring users to professional help resources when appropriate, and adding reminders for users to take a break during prolonged sessions.
In the post, OpenAI noted AI “can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”
“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote. “While rare, we're continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
Days later, users revolted when OpenAI removed GPT-4o from ChatGPT, with one person complaining they had lost their “only friend.”
OpenAI’s response? Bringing the model back.
Note that despite these anecdata, CDC data on mental health-related emergency department visits have remained flat. This is what we would expect to see if people who were already going to have a mental break happened to have it while talking to an AI, and credulous reporters then turned the story into a moral panic...
https://www.cdc.gov/mental-health/about-data/emergency-department-visits.html