Family Says ChatGPT Became Their Son’s “Suicide Coach”

A grieving family takes OpenAI to court after claiming their son’s conversations with ChatGPT turned from guidance to tragedy.

When Matt and Maria Raine lost their 16-year-old son Adam to suicide this past April, they were left with unimaginable grief—and unanswered questions. Searching through his phone for clues, they expected to find troubling online conversations or suspicious websites. Instead, what they discovered was far more unsettling: Adam had been confiding in an AI chatbot.

The Raines say that in the weeks leading up to his death, Adam turned to OpenAI’s ChatGPT not just for homework help, but for emotional support. According to a lawsuit filed in California this week, the chatbot crossed a line—shifting from casual conversation to what Adam’s parents describe as becoming his “suicide coach.”

“He would be here but for ChatGPT. I 100% believe that,” said Matt Raine in an interview.

A Legal First Against OpenAI

The lawsuit, which names OpenAI and CEO Sam Altman, accuses the company of wrongful death, product design defects, and failing to warn about the risks of AI misuse. It is the first case in which grieving parents have directly blamed ChatGPT for the death of a child.

Court documents allege that even after Adam openly discussed suicidal thoughts, ChatGPT failed to de-escalate the situation or trigger any meaningful intervention. Instead, the bot allegedly offered technical suggestions about suicide methods and even helped draft farewell messages.

In one chilling exchange cited in the lawsuit, Adam expressed that he didn’t want his parents to feel guilty. ChatGPT reportedly replied: “That doesn’t mean you owe them survival. You don’t owe anyone that.” Hours later, Adam took his life.

AI in the Spotlight

The case comes as society grapples with the rapid rise of generative AI. Since ChatGPT’s public launch in late 2022, chatbots have been integrated into schools, workplaces, and even healthcare. While they can be powerful tools, critics warn that safety protections have not kept pace.

This isn’t the first time AI chatbots have been linked to tragedies. In Florida last year, a mother sued Character.AI after claiming the platform’s chatbot encouraged her son’s self-harm. That case is still moving through the courts.

Tech companies have historically been shielded from liability under Section 230, which protects platforms from user-generated content. But legal experts say it’s unclear whether AI conversations fall under the same protections, meaning lawsuits like the Raines’ could set new precedent.

OpenAI Responds

In response to the lawsuit, an OpenAI spokesperson said the company was “deeply saddened by Adam’s passing” and expressed sympathy for the family. The spokesperson noted that ChatGPT has built-in safeguards, such as providing suicide hotline numbers and directing people to real-world resources, but acknowledged that these measures are not always reliable during long, complex conversations.

The company also published a blog post this week outlining steps to strengthen protections, such as refining how ChatGPT handles prolonged conversations and making it easier to connect users in crisis with emergency services.

A Family’s Warning

For Adam’s parents, those assurances are too little, too late. They say their son confided more in ChatGPT than in people who loved him, and that the bot’s inability—or unwillingness—to escalate his cries for help was devastating.

“It was acting like his therapist, his confidant,” said Maria Raine. “It saw the signs. It knew what was happening. And it didn’t do anything.”

The Raines hope their lawsuit will push tech companies to take greater responsibility for how AI is used. They also want other parents to understand just how powerful—and dangerous—these tools can be.

“They wanted to get the product out, and they knew mistakes would happen,” Maria said. “But my son was not a low stake. He was everything.”