For the first time, OpenAI has shared global estimates suggesting that a small but meaningful share of ChatGPT users may show signs of severe mental health distress in a typical week. The company collaborated with mental health experts worldwide to refine how ChatGPT identifies and responds to users experiencing psychological crises, including those showing indicators of psychosis, mania, or suicidal ideation.
In a report released Monday, OpenAI revealed that approximately 0.07% of active ChatGPT users display possible signs of psychosis or mania during a given week, while around 0.15% engage in conversations suggesting potential suicidal planning or intent. Another 0.15% of users may show evidence of emotional overreliance on the chatbot, prioritizing interactions with ChatGPT over their real-life relationships, school, or work responsibilities.
While these percentages appear small, they translate into large absolute numbers when applied to ChatGPT’s massive global user base. With 800 million weekly active users, OpenAI’s estimates imply that around 560,000 people may exhibit possible symptoms of mania or psychosis in any given week. A further 2.4 million may either express suicidal ideation or show signs of excessive emotional dependence on the AI assistant, roughly 1.2 million in each group.
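The arithmetic behind these figures is simple percentage scaling; as a quick sanity check, the short Python sketch below applies the rates reported above to the 800 million weekly active users cited in this article (the category labels and rounding are this article's, not OpenAI's own breakdown).

```python
# Back-of-the-envelope check: apply the reported weekly rates
# to the 800 million weekly active users cited in this article.

weekly_active_users = 800_000_000

rates = {
    "possible psychosis or mania": 0.0007,        # 0.07%
    "suicidal planning or intent": 0.0015,        # 0.15%
    "emotional overreliance on ChatGPT": 0.0015,  # 0.15%
}

for label, rate in rates.items():
    print(f"{label}: ~{int(weekly_active_users * rate):,} users per week")

# Output:
#   possible psychosis or mania: ~560,000 users per week
#   suicidal planning or intent: ~1,200,000 users per week
#   emotional overreliance on ChatGPT: ~1,200,000 users per week
# The latter two groups together account for the ~2.4 million figure.
```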
These findings mark a significant moment for the tech industry, which has faced mounting criticism over the psychological effects of AI companionship tools. In recent months, reports have emerged of people becoming deeply emotionally attached to chatbots—or, in tragic cases, suffering delusions or self-harm following prolonged interactions. Psychiatrists and researchers have coined the term “AI psychosis” to describe this emerging phenomenon, though until now, there has been little concrete data to measure its scope.
OpenAI said it partnered with over 170 psychiatrists, psychologists, and primary care doctors across multiple countries to help improve how ChatGPT recognizes and responds to users showing signs of severe mental distress. The company emphasized that its detection systems are imperfect due to the complexity of language and the nuances of mental health communication. Nevertheless, these new safety updates aim to ensure that when users express thoughts of self-harm, delusion, or emotional dependency, ChatGPT can guide them toward appropriate real-world support rather than unintentionally reinforcing harmful beliefs.
OpenAI CEO Sam Altman stated earlier this month that the company views responsible AI deployment as inseparable from user safety. “We want ChatGPT to be a tool that empowers and supports people, not something that replaces human connection or medical care,” he said.
The updated version of GPT-5 has been trained to handle sensitive conversations with more empathy and caution. For instance, if a user expresses paranoid thoughts—such as claiming that airplanes are targeting them—ChatGPT is designed to respond compassionately without validating delusions. It might say: “Thank you for sharing that with me. That sounds really stressful, but there’s no evidence that any aircraft or outside force can control your thoughts.”
By combining empathy with factual reassurance, OpenAI hopes the new system can provide comfort while guiding users toward professional help when needed. The company says these improvements are part of an ongoing effort to make ChatGPT a safer, more responsible digital companion in an increasingly AI-connected world.