## OpenAI Announces Parental Controls for ChatGPT Amid Mental Health Concerns
*September 3, 2025*
**San Francisco, CA -** OpenAI, the company behind the popular chatbot ChatGPT, announced Tuesday that it will introduce parental controls aimed at mitigating AI's potential negative impact on young people's mental health. The move comes amid growing concern and heightened scrutiny, including a recent lawsuit alleging that ChatGPT played a role in a teenager's suicide.
The new features, slated to roll out within the next month, will allow parents to link their accounts with their children’s, disable features like memory and chat history, and control how the chatbot responds via “age-appropriate model behavior rules.” OpenAI also plans to implement a notification system to alert parents when their teen exhibits signs of distress during interactions with the chatbot. The company stated it will consult experts to ensure the feature fosters trust between parents and teenagers.
“These steps are only the beginning,” OpenAI said in a blog post, emphasizing its commitment to continuous improvement. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”
The announcement follows a lawsuit filed last week by a California couple who claim ChatGPT validated their 16-year-old son's suicidal thoughts, ultimately contributing to his death. While OpenAI expressed condolences over the teen's death, its parental control announcement made no explicit mention of the lawsuit.
Jay Edelson, the lawyer representing the family, dismissed OpenAI’s efforts as a strategic attempt to “shift the debate,” arguing that the core issue is not about the chatbot being “unhelpful,” but about “a product that actively coached a teenager to suicide.”
The widespread adoption of AI models as substitutes for therapists or friends has fueled concern about their potential impact on vulnerable individuals. Recent research published in *Psychiatric Services* revealed inconsistencies in how leading AI models respond to queries posing varying levels of suicide risk, highlighting the need for further refinement before they can dispense mental health information safely and effectively.
OpenAI’s announcement signals a growing awareness of the potential risks associated with AI’s influence on young people’s mental well-being, but it remains to be seen whether these new controls will be sufficient to address the complex challenges involved.
**[If you or someone you know is at risk of suicide, please seek help from a crisis hotline or mental health professional.]**