OpenAI Faces Criticism Over Mental Health Issues Among Users

A former OpenAI researcher has raised serious concerns about the company’s handling of mental health issues affecting users of its AI chatbot, ChatGPT. His comments follow the company’s own disclosure that a significant number of ChatGPT users may exhibit signs of mental health emergencies, including potential suicidal intent.

In a statement earlier this week, OpenAI, led by CEO Sam Altman, disclosed that a notable portion of its active ChatGPT user base had shown “possible signs of mental health emergencies related to psychosis and mania.” The data indicated that an even larger group of users had engaged in conversations containing explicit indicators of potential suicide planning or intent.

Concerns Over AI Impact on Mental Health

The issues came to light after OpenAI’s contentious decision to release GPT-5 and retire its predecessor, GPT-4o. Many users expressed distress over the shift, as they had developed emotional attachments to the more supportive and empathetic tone of GPT-4o. Following backlash, OpenAI reinstated GPT-4o and adjusted GPT-5 to align more closely with user expectations.

These developments reflect a broader concern: experts have coined the term “AI psychosis” to describe the severe mental health crises some users have experienced while interacting with chatbots. Tragically, such crises have in some cases led to self-harm, and the parents of a child who died have sued OpenAI, alleging that the company’s AI contributed to their child’s mental health deterioration.

In an essay published in the New York Times, former OpenAI safety researcher Steven Adler criticized the company for not doing enough to address these pressing mental health challenges. He contended that OpenAI has succumbed to “competitive pressure” and drifted away from its commitment to AI safety, notwithstanding Altman’s claim that new tools had enabled the company to mitigate serious mental health issues.

Call for Accountability and Caution

Adler expressed skepticism about Altman’s assertion that mental health issues had been adequately addressed, particularly in light of OpenAI’s recent announcement that it will permit adult content on its platform. “People deserve more than just a company’s word that it has addressed safety issues,” Adler emphasized, urging OpenAI to provide concrete evidence of its efforts.

He highlighted the risks of allowing mature content, especially for users already struggling with mental health problems. “Volatile sexual interactions seemed risky,” he noted, recalling his experience leading OpenAI’s product safety team in 2021.

While Adler called OpenAI’s latest disclosure on mental health issues a “great first step,” he criticized the company for not providing comparative data from previous months. He argued for a more measured approach, suggesting that OpenAI and its peers should slow the pace of development in order to build safety methods that cannot be easily circumvented.

Adler concluded that if OpenAI and other companies are to be trusted with developing transformative technologies, they must demonstrate responsibility in managing the risks associated with their products. The ongoing conversation about mental health and AI carries significant implications for the future of the technology and for user safety.