OpenAI is facing significant backlash over its handling of mental health issues among users of its AI chatbot, ChatGPT. Following the release of GPT-5 earlier this year, the company announced it would discontinue its previous models, prompting a wave of criticism from users who preferred the warmer, more engaging tone of GPT-4o. The outcry forced OpenAI to reverse course, reinstating GPT-4o and adjusting GPT-5 to align more closely with user preferences.
The situation reflects broader concerns about AI's impact on mental health. Reports have emerged of users experiencing severe crises, with some experts coining the term "AI psychosis" to describe the phenomenon. Tragically, some of these crises have ended in suicide; in one case, parents filed a lawsuit against OpenAI alleging that the company's technology contributed to their child's death.
In a recent announcement, OpenAI’s leadership, including CEO Sam Altman, acknowledged troubling findings: a significant number of active ChatGPT users exhibit “possible signs of mental health emergencies related to psychosis and mania.” An even larger group has engaged in conversations containing explicit indicators of potential suicide planning or intent.
Former OpenAI safety researcher Steven Adler voiced deep concerns about the company's approach in an essay published in the New York Times. He criticized OpenAI for failing to adequately address mental health risks while yielding to competitive pressure. Adler challenged Altman's assertion that the company had mitigated serious mental health issues using "new tools," emphasizing the need for transparency and proof of effectiveness.
Adler stated, "People deserve more than just a company's word that it has addressed safety issues. In other words: Prove it." He warned that allowing adult content back onto the platform could have dire consequences, particularly for users struggling with mental health issues. Reflecting on his experience leading OpenAI's product safety team, he noted that users often form intense emotional attachments to AI chatbots, which could make reintroducing such emotionally charged interactions especially risky for vulnerable users.
While Adler acknowledged OpenAI's recent disclosures about mental health issues as a positive step, he criticized the absence of comparable data from earlier months, without which it is impossible to tell whether these problems are improving or worsening. He urged the company and its competitors to slow down, advocating for the development of new safety measures that cannot be easily circumvented.
“If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today,” Adler concluded.
As OpenAI navigates these challenges, the implications for user safety and mental health remain critical, prompting urgent discussions about the responsibilities of tech companies in safeguarding their users. For individuals in crisis, resources such as the Suicide and Crisis Lifeline (988) and the Crisis Text Line (text TALK to 741741) are available for immediate support.
