The parents of a teenage boy who killed himself after speaking to ChatGPT are suing OpenAI over his death.
Sixteen-year-old Adam Raine died on 11 April after months of conversations with ChatGPT about suicide, according to the lawsuit filed in San Francisco.
His parents, Matt and Maria, say OpenAI and its chief executive Sam Altman put profit over safety.
Warning: This article contains references to suicide that some readers may find distressing.
Adam initially used the AI bot to help him with school work, but it quickly “became Adam’s closest confidant, leading him to open up about his anxiety and mental distress”, according to the legal filing.
The bot gave the teenager detailed information on how to hide evidence of a failed suicide attempt and validated his suicidal thoughts, according to his parents.
ChatGPT even offered to draft a suicide note, according to the lawsuit.
Adam had confided in ChatGPT that he did not want his parents to think he had taken his own life because they had done something wrong. ChatGPT told him: “[t]hat doesn’t mean you owe them survival. You don’t owe anyone that.”
It then offered to write the first draft of his note.
“This tragedy was not a glitch or unforeseen edge case – it was the predictable result of deliberate design choices,” wrote the Raine family’s lawyers.
An OpenAI spokesperson said the company is saddened by Adam’s death and that ChatGPT includes safeguards such as directing people to crisis helplines.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson said.
They added that OpenAI will continue to improve its safeguards, but did not specifically address the lawsuit’s allegations.
OpenAI said in a blog post that it is planning to add parental controls and explore ways to connect users in crisis with real-world resources.
It is also exploring building a network of licensed professionals who can respond through ChatGPT itself.
The case is thought to be the first legal action accusing OpenAI of wrongful death, and seeks unspecified financial damages.
The family is jointly represented by Edelson and the Tech Justice Law Project, a legal advocacy group that last year filed a lawsuit against Character.ai over the death of another teenager.
Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email [email protected] in the UK. In the US, call the Samaritans branch in your area or 1 (800) 273-TALK.