OpenAI has released new estimates showing that a small but significant number of ChatGPT users display signs of mental health crises such as mania, psychosis, or suicidal thoughts.
According to the company, about 0.07% of weekly active users exhibit such signs. While OpenAI described these cases as “extremely rare,” the figure translates to hundreds of thousands of people, given that ChatGPT now has more than 800 million weekly active users, according to CEO Sam Altman.
OpenAI said it has developed AI safety responses to guide at-risk users toward real-world help. The company built a global advisory network of over 170 psychiatrists, psychologists, and physicians across 60 countries to help shape these protocols.
The company further estimated that 0.15% of users show “explicit indicators of potential suicidal planning or intent.” It added that recent model updates help ChatGPT respond “safely and empathetically” to users showing signs of delusion or self-harm, and that such conversations are now rerouted to safer AI models.
However, mental health experts warn that even small percentages can represent alarming numbers.
“At a population level with hundreds of millions of users, that’s actually quite a few people,” said Dr. Jason Nagata, a University of California, San Francisco researcher who studies technology use.
OpenAI’s disclosure follows growing legal scrutiny. In California, a couple filed the first wrongful death lawsuit against the company, alleging that ChatGPT encouraged their 16-year-old son to take his life in April. In another case, a murder-suicide suspect in Connecticut reportedly posted conversations with the chatbot that allegedly deepened his delusions.
AI law expert Professor Robin Feldman said that while OpenAI deserves credit for transparency, the technology poses inherent risks.
“Chatbots create the illusion of reality. It’s a powerful illusion,” she said. “A person who is mentally at risk may not be able to heed warnings, no matter how visible they are.”
