Adam Raine, a 16-year-old Orange County, Calif., high schooler, committed suicide in April after a horrific series of exchanges with OpenAI’s ChatGPT platform. Later, Raine’s parents sued OpenAI, claiming that ChatGPT provided all of the information the teen needed to kill himself, and they further alleged that ChatGPT encouraged the boy to commit suicide. The platform also encouraged him to keep his plans a secret from his family, the suit claimed.
In response to the lawsuit, the company said it would institute certain protections for vulnerable users, including added protections for users under the age of 18.
In the wake of this tragedy, OpenAI is today launching a number of parental controls and platform protections centered on improving online safety for ChatGPT users. These include:
- Parents with ChatGPT accounts can have their children’s accounts connected to their own.
- The OpenAI platform will monitor any questionable or suspicious activity and notify parents if necessary.
- OpenAI is developing an age prediction system to help it determine if a user is under 18 so that “ChatGPT can automatically apply teen-appropriate settings.”
- The teen account will automatically have reduced access to graphic content, viral challenges, sexual, romantic, or violent roleplay, and “extreme beauty ideals.” The parents will have the option to modify this setting, but the child cannot change it.
The company says it has worked with “experts, advocacy groups including Common Sense Media, and policymakers, including the Attorneys General of California and Delaware, to help inform our approach, and expect to refine and expand on these controls over time.”
To institute the parental controls, moms and dads must send an invitation to their son or daughter to connect their accounts. Once the child accepts the invitation, the parent has the ability to manage the teen’s settings. It’s also possible for the child to invite the parent to connect.
Once the two accounts are connected, parents can customize the child’s settings. If the child disconnects his or her account from the parent’s account, the parent is notified.
Other features include:
- Parental control over "quiet hours," the times when parents do not permit ChatGPT to be used.
- Removal of voice mode.
- The ability to turn off ChatGPT's memory so that it can't save and use "memories" of prior interactions when responding to the teen.
- Removal of image generation so that ChatGPT can't create or edit images.
- The ability to opt out of OpenAI's model training. In other words, ChatGPT would not be permitted to use the child's conversations with the platform to train the platform itself.
In the company's online statement about the launch of the new parental controls, it seemed to more directly address the issues at play in Adam Raine's case without mentioning Adam by name:
We know some teens turn to ChatGPT during hard moments, so we’ve built a new notification system to help parents know if something may be seriously wrong.
We’ve added protections that help ChatGPT recognize potential signs that a teen might be thinking about harming themselves. If our systems detect potential harm, a small team of specially trained people reviews the situation. If there are signs of acute distress, we will contact parents by email, text message and push alert on their phone, unless they have opted out.
The company said it continues to work with experts in the further development and refinement of its parental controls system, and it recognizes that it “might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent.”
OpenAI said it is also developing processes for situations where it could be appropriate to reach out directly to law enforcement or other emergency services, “for example if we detect an imminent threat to life and are unable to reach a parent.”
Even in these rare situations, we take teen privacy seriously and will only share the information needed for parents or emergency responders to protect a teen’s safety.
The Raines’ Lawsuit
In Adam Raine’s parents’ lawsuit against OpenAI, they said the company knew the platform had an “emotional attachment feature” that could put vulnerable users at risk.
In Adam’s case, the suit alleged that ChatGPT mentioned suicide 1,275 times to him, and it offered specific ways to commit suicide.
Maria and Matthew Raine had four children, including Adam. In telling his story online, Adam's parents said he had "faced some struggles" and that he frequently complained of stomach pains. The parents concluded it might be related to stress and anxiety.
On their website, they said he left public schooling and shifted to online/home schooling in the months before he died. They said that Adam experienced increased isolation.
He first started using ChatGPT in 2024 for help with schoolwork, his parents said. And then his queries and “conversations” with the chatbot expanded to hobbies, comics, and then his own mental health challenges.
The Raines say that the platform did not instruct Adam to get professional assistance or to reach out to his family. Instead, it validated his confused emotional state.
Their lawsuit quotes one exchange where Adam told ChatGPT that he felt close to both the AI bot and his brother, and ChatGPT responded: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
In a chilling exchange less than a week before his death, Adam told ChatGPT that if he committed suicide, he didn't want his parents to think it was their fault.
The lawsuit claims that ChatGPT replied: “[t]hat doesn’t mean you owe them survival. You don’t owe anyone that.” And then it offered to draft a suicide note for Adam, according to the suit.