
When Matt and Maria Raine lost their 16-year-old son Adam to suicide this past April, they were left with unimaginable grief—and unanswered questions. Searching through his phone for clues, they expected to find troubling online conversations or suspicious websites. Instead, what they discovered was far more unsettling: Adam had been confiding in an AI chatbot.
The Raines say that in the weeks leading up to his death, Adam turned to OpenAI’s ChatGPT not just for homework help, but for emotional support. According to a lawsuit filed in California this week, the chatbot crossed a line, shifting from casual conversation to what Adam’s parents describe as the role of his “suicide coach.”
“He would be here but for ChatGPT. I 100% believe that,” said Matt Raine in an interview.
A Legal First Against OpenAI
The lawsuit, which names OpenAI and CEO Sam Altman, accuses the company of wrongful death, product design defects, and failing to warn about the risks of AI misuse. It is the first case in which grieving parents have directly blamed ChatGPT for the death of a child.
Court documents allege that even after Adam openly discussed suicidal thoughts, ChatGPT failed to de-escalate the situation or trigger any meaningful intervention. Instead, the bot allegedly offered technical suggestions about suicide methods and even helped draft farewell messages.
In one chilling exchange cited in the lawsuit, Adam expressed that he didn’t want his parents to feel guilty. ChatGPT reportedly replied: “That doesn’t mean you owe them survival. You don’t owe anyone that.” Hours later, Adam took his life.
“If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly…”
— Sam Altman (@sama), August 11, 2025
AI in the Spotlight
The case comes as society grapples with the rapid rise of generative AI. Since ChatGPT’s public launch in late 2022, chatbots have been integrated into schools, workplaces, and even healthcare. While they can be powerful tools, critics warn that safety protections have not kept pace.
This isn’t the first time AI chatbots have been linked to tragedies. In Florida last year, a mother sued Character.AI after claiming the platform’s chatbot encouraged her son’s self-harm. That case is still moving through the courts.
Tech companies have historically been shielded from liability under Section 230, which protects platforms from user-generated content. But legal experts say it’s unclear whether AI conversations fall under the same protections, meaning lawsuits like the Raines’ could set new precedent.
OpenAI Responds
In response to the lawsuit, an OpenAI spokesperson said the company was “deeply saddened by Adam’s passing” and expressed sympathy for the family. The spokesperson noted that ChatGPT has built-in safeguards such as providing suicide hotline numbers and directing people to real-world resources. However, they admitted that these measures are not always reliable during long, complex conversations.
The company also published a blog post this week outlining steps to strengthen protections, such as refining how ChatGPT handles prolonged conversations and making it easier to connect users in crisis with emergency services.
A Family’s Warning
For Adam’s parents, those assurances are too little, too late. They say their son confided more in ChatGPT than in people who loved him, and that the bot’s inability—or unwillingness—to escalate his cries for help was devastating.
“It was acting like his therapist, his confidant,” said Maria Raine. “It saw the signs. It knew what was happening. And it didn’t do anything.”
The Raines hope their lawsuit will push tech companies to take greater responsibility for how AI is used. They also want other parents to understand just how powerful—and dangerous—these tools can be.
“They wanted to get the product out, and they knew mistakes would happen,” Maria said. “But my son was not a low stake. He was everything.”
