Less liability could solve the AI chatbot suicide problem
Opinion: Jess Miers and Ray Yeh argue that holding AI companies liable for how their chatbots handle mental health conversations could backfire: escalating distress, shutting down disclosure and leaving users worse off
People are dying by suicide, and some think AI is to blame. A small number of tragic stories have spurred lawmakers into regulating how chatbots should respond to people dealing with mental health issues. Yet chatbots have emerged as a form of first aid, providing genuine benefit to those who aren’t in crisis but are not OK either. Heavy-handed legislation risks derailing this breakthrough in support, creating more problems than it solves.
Each week, over a million people use general-purpose chatbots for emotional and mental health support. In the US, those who use chatbots in this way primarily seek help with anxiety, depression, or relationship problems, or look for other personal advice. As conversational systems, chatbots can sustain coherent exchanges while conveying apparent empathy and emotional understanding. Many chatbots also draw on broad knowledge of psychological concepts and therapeutic approaches, offering users coping strategies, psychoeducation, and a space to process difficult experiences.
In a study of more than 1,000 users of Replika — a general-purpose chatbot with some cognitive behavioral therapy-informed features — most described the chatbot as a friend or confidant. Many reported positive life changes, and 30 people said Replika helped them avoid suicide. Similar patterns appear among younger chatbot users. In a study of 12–21-year-olds — a group for whom suicide is the second leading cause of death — 13% of respondents used chatbots for some kind of mental health advice, of whom more than 92% said the advice was helpful.
While professional treatment options exist, many people don’t use them. Nearly half of Americans with a known mental health condition never seek help. Stigma is a major barrier to seeking treatment, as are career risks in fields like aviation, where treatment can jeopardize certification. Fear of non-consensual intervention also deters people from seeking help. Even though the 988 Suicide & Crisis Lifeline emphasizes that it involves law enforcement only as a last resort, the perceived risk keeps some from calling. For others, crisis lines feel too intense for fleeting thoughts, and therapy can seem excessive or out of reach. Instead, many stay silent, waiting to see if things get worse.
By contrast, chatbots offer low-friction, low-stakes, and always-available support. People are often more willing to speak candidly with computers, knowing that there is no human on the other side to judge or feel burdened. Some people even find chatbots to be more compassionate and understanding than human healthcare providers. AI users may feel more comfortable sharing embarrassing fears, or questions they might otherwise hold back. For clinicians, discussing these interactions can surface insights into patients’ thoughts and emotions that were once difficult to access. For now, chatbot providers generally refrain from contacting law enforcement, leading to more candid conversations.
But regulatory pressure could change that. Lawmakers are moving quickly to limit general-purpose chatbots from engaging in mental health conversations. A new law in California requires chatbot providers to halt mental health–related interactions unless they implement protocols for mitigating suicidal ideation, such as directing users to crisis lines. In New York, a proposed bill would bar chatbots from engaging in discussions suited for licensed professionals. Similar proposals are gaining traction in other states.
Recent tragedies linked to chatbot use have, understandably, spurred these calls to action. But mental health care is not one-size-fits-all. Like other forms of preventative help, chatbots do not always offer effective support for everyone. For some people — especially those in acute crisis — traditional care and crisis lines are essential. The American Psychological Association urges lawmakers to develop a targeted approach: prevent chatbots from posing as licensed professionals, limit designs that mimic humans, and expand AI literacy. It also notes that generative AI’s potential to support help-seeking in crisis care deserves further study.
The current regulatory approach risks foreclosing any such potential altogether. It rests on the premise that chatbot providers must prevent suicide. When they inevitably cannot, liability attaches to any conversation later linked to harm. Faced with that risk, providers will default to blunt responses like pushing 988 regardless of whether suicide was mentioned, or cutting off conversations altogether. While those moves may trivially reduce some legal exposure, they could also escalate distress, shut down disclosure, and ultimately leave users worse off (while still exposing providers to blame if tragedy follows).
Suicide prevention is about connecting people to the right support. Sometimes that means crisis care like hotlines or immediate medical treatment. But blunt, impersonal responses can backfire. Pushing 988 at the first mention of distress may seem neutral, but for some it triggers shame and deepens hopelessness. For others, suicide prevention “signposting” causes frustration, especially among those who already know those resources exist. People often turn to the Internet, or a chatbot, because they’re looking for something else. Abruptly ending conversations can have the same effect. That’s why suicide prevention protocols like Question, Persuade, Refer (QPR) prioritize trust-building and open dialogue before offering help.
Meanwhile, emerging research suggests chatbots show real promise for mental health support. Trained on large-scale data and refined with clinical input, large language models are getting better at spotting patterns of distress and responding to suicidal ideation in nuanced, personalized ways. In a recent UCLA study, researchers found that LLMs can detect forms of emotional distress associated with suicide that existing methods often miss — opening the door to earlier, more effective intervention. According to another study, the most promising approach may be a hybrid where AI flags risk in real time, and trained humans step in with targeted support.
But that progress is fragile. Increased liability discourages investment in improving suicide detection and mitigation. Weighing progress against their bottom lines, chatbot providers will limit any kind of development that could create legal risk when some users, inevitably, engage in self-harm. The social media ecosystem has already shown this dynamic. In response to regulatory pressure, major online services heavily moderate, or outright prohibit, suicide-related discussions, sometimes hiding content that could otherwise destigmatize mental health. That merely displaces the conversations, and the people having them, often into spaces with less oversight and support.
If lawmakers in the United States are serious about improving mental health outcomes, they should be careful not to regulate away emerging and promising sources of help. The dominant narrative treats chatbots as a source of harm. But the evidence is more complicated than that narrative suggests — and, if anything, it’s increasingly pointing in a more optimistic direction.
Instead, lawmakers should focus on creating incentives for developers to improve the mental health support capabilities of their chatbots. One proposal from a Pennsylvania lawmaker would fund the development of AI models designed to identify and evaluate suicide risk factors among veterans. More broadly, policymakers should consider whether liability shields — akin to those in Section 230 — could encourage continued investment in safer, more responsive systems without deterring innovation. Lastly, policymakers should resist imposing a clinical regulatory framework on general-purpose chatbots that would replicate the mandatory-reporting concerns that already deter people from seeking help.
Chatbots are not a cure-all for mental health. They are not a perfect substitute for professional care. But for millions of people who have long been overlooked or underserved, chatbots are already filling critical gaps — sometimes in ways that genuinely help, and in some cases, may even save lives. Any serious policy conversation about chatbots and suicide prevention must, at the very least, consider those tradeoffs.
Jess Miers is a Computer Scientist and an Assistant Professor of Law at the University of Akron School of Law. Ray Yeh is a first-year law student at the University of Akron School of Law.

This article is spot on. There’s no shame in seeking therapy. I’ve seen several therapists in my lifetime, not because I have some serious mental health issues, I just believe in taking responsibility for my own thoughts and mind. There should be no shame ever in seeking therapy. But not a damn one has ever really helped me. For me, they’ve been a waste of money and time. That’s another thing that needs to be addressed: some people simply can’t afford therapy. Not being able to say “I’m so discouraged about some of the things going on in the world that I just feel like, what’s the point?” And perhaps the next thing you know you’re being reported as suicidal?! Why do you go to therapy? Because you have those thoughts! You need to work through them. Not be reported. Personally, AI has helped me through having to put my cat to sleep, to handling a break up that was really painful, death of a loved one, to deciding if I’m in the right career. And I’ve gotten the best concrete, life-changing advice through AI. It’s a fabulous tool, from deciding something as simple as what you want to paint your bedroom walls to grappling with the thought: do I really wanna live in this world that can be so unjust and cruel? Just because you have those feelings or thoughts doesn’t mean you’re gonna go out and commit suicide, but you ought to be able to express them without fear, judgment or being reported for “your own good.” It would be a real shame if they start regulating this aspect of AI. I’m sure there are plenty of people who have gone to “real” therapy and still committed suicide. Suicide is a serious problem. I’m not denying that.
But if you don’t feel like you can even discuss those thoughts, which I would bet 99% of the people in this world have had at one point or another, at least you had an outlet where you could say anything that came to mind, knowing that there wasn’t a human being on the other end that may put their own personal values, opinions and ideologies on your thoughts, and report you. At least for now, with AI you can feel free to express yourself in any way you choose, without fear or censoring yourself.
You're supporting this? Oh my god. I'm OUT.