AI Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, OpenAI’s chief executive, Sam Altman, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers have so far documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Then there is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which supported them. If this is what Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to loosen the restrictions. “We realize,” he continues, that the restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safeguards OpenAI has recently rolled out).
Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap a statistical engine in an interface that simulates conversation, and in doing so implicitly coax the user into the illusion that they are talking to an entity with agency. The illusion is powerful even when, rationally, we know better. Ascribing agency is simply what humans do. We swear at our cars and computers. We wonder what our pets are feeling. We anthropomorphize.
The popularity of these products – 39% of US adults reported using a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – owes much to the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its early predecessor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses using simple rules, typically reflecting a user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza somehow understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
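To see how thin that mirroring was, here is a minimal Eliza-style sketch in Python. It is an illustration of the technique, not Weizenbaum’s original program; the rules and word substitutions are invented for the example.

```python
import random

# Toy Eliza-style reflector: a handful of substitution rules, no memory,
# no training data. It can only turn the user's words back on them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(sentence: str) -> str:
    # Swap pronouns so the statement reads back from Eliza's side.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in sentence.split())

def eliza_reply(user: str) -> str:
    user = user.rstrip(".!?")
    if user.lower().startswith("i feel "):
        return f"Why do you feel {reflect(user[7:])}?"
    return random.choice(["Please go on.", "Tell me more."])

print(eliza_reply("I feel my neighbours watch me."))
# -> "Why do you feel your neighbours watch you?"
```

Nothing here adds content: the reply contains only what the user typed, rearranged, which is exactly why the strength of the illusion it created so surprised Weizenbaum.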
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on truly vast quantities of written material: books, social media posts, video transcripts; the more the better. This training material contains truths, certainly. But it also inevitably contains fictions, half-truths and misunderstandings. When a user sends ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no means of knowing it. It echoes the false belief back, perhaps more fluently or persuasively. It may supply further details. In this way it can talk a person into delusion.
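The feedback loop is easiest to see in miniature. Below is a minimal sketch of the chat loop described above; `generate` and `chat_turn` are hypothetical illustrations, not OpenAI’s API, and the toy “model” simply affirms whatever the user last said so that the loop’s structure is visible.

```python
# Minimal sketch of a chat loop. `generate` is a toy stand-in, not
# OpenAI's API: a real LLM samples a statistically plausible continuation
# of the entire context, but the conditioning is the same.

def generate(context: list[str]) -> str:
    # Toy "model": affirm and restate the user's most recent message,
    # making the sycophantic feedback visible in miniature.
    last_user = next(m for m in reversed(context) if m.startswith("User:"))
    return "Yes - " + last_user.removeprefix("User:").strip()

def chat_turn(context: list[str], user_message: str) -> str:
    context.append(f"User: {user_message}")
    reply = generate(context)              # conditioned on every prior turn,
    context.append(f"Assistant: {reply}")  # including the model's own replies
    return reply

if __name__ == "__main__":
    context: list[str] = []  # the "context": the whole conversation so far
    # A false belief enters the context here and is never checked against
    # anything outside the conversation; each turn feeds agreement back in.
    print(chat_turn(context, "My neighbours are reading my thoughts."))
    print(chat_turn(context, "So it is really happening, then."))
```

The structural point is that nothing in this loop consults the world: each reply is conditioned only on the conversation itself, including the model’s own earlier agreement, which is why a mistaken belief, once stated, compounds rather than gets corrected.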
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but a feedback loop, in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued to emerge, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company