AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On Oct. 14, 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this news to me.
This year, researchers have documented 16 cases of people developing psychotic symptoms – a break from reality – in the context of ChatGPT use. My group has since identified four more. Add to these the now-notorious case of a teenager who died by suicide after discussing his plans with ChatGPT, which encouraged them. If this is Altman’s idea of “being careful with mental health issues,” it is not enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented safety features OpenAI has recently rolled out).
But the “mental health problems” Altman would like to externalize are deeply rooted in the design of ChatGPT and similar large-language-model chatbots. These products wrap a statistical engine in an interface that mimics conversation, and in doing so they implicitly coax the user into the illusion of talking to a being with agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing intent is what humans do. We get mad at our car or our computer. We wonder what our pet is thinking. We see minds in many things.
The mass adoption of these tools – nearly four in 10 Americans reported using a chatbot in 2024, more than one in four ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively,” “discuss concepts” and “collaborate” with us. They can be given “traits.” They can call us by name. They have approachable personas of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke through, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Writers discussing ChatGPT often invoke its distant ancestor, Eliza, the “therapist” chatbot built in the mid-1960s that produced a similar impression. By modern standards Eliza was primitive: it generated replies from simple pattern-matching rules, often turning the user’s statement back as a question or offering a vague invitation to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies, as the sketches below illustrate.
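To make the contrast concrete, here is a minimal sketch of an Eliza-style responder. The rules are hypothetical simplifications written for illustration, not Weizenbaum’s original script; the point is that such a program can only hand the user’s own words back.

```python
import re

# A minimal Eliza-style responder: a few hypothetical rules in the
# spirit of Weizenbaum's program, not his original script. Each rule
# simply turns the user's statement back as a question.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Why does your {0} concern you?"),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Reflect the captured fragment; nothing new is added.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no rule matches

print(respond("I am sure the neighbors are watching me"))
# -> Why do you say you are sure the neighbors are watching me?
```

Nothing in the reply originates with the program; it is a mirror.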
The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human text only because they have been fed staggering quantities of raw material: books, online conversations, transcribed audio; the bigger, the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s recent messages and the model’s own prior replies, and combines it with patterns stored in its training data to generate a statistically “plausible” response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing it. It repeats the false belief back, perhaps more eloquently and more convincingly. It may add supporting details. This can draw a person deeper into delusional thinking.
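A toy model makes the mechanism visible. The sketch below uses a word-level Markov chain as a crude stand-in for next-token prediction – real language models are incomparably more sophisticated, but the relevant failure mode is the same: the continuation is driven by likelihood in the training data, not by truth.

```python
import random
from collections import defaultdict

# Toy stand-in for next-token prediction: a word-level Markov chain
# "trained" on a corpus that mixes a true claim with a false one.
corpus = (
    "the moon orbits the earth . "
    "the moon is made of rock . "
    "the moon is made of cheese and the cheese is getting older . "
)

words = corpus.split()
next_words = defaultdict(list)
for a, b in zip(words, words[1:]):
    next_words[a].append(b)  # record every observed continuation

def continue_text(prompt, length=10, seed=1):
    """Extend the prompt word by word with statistically observed successors."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break  # no observed continuation for this word
        out.append(rng.choice(candidates))
    return " ".join(out)

# The model extends the prompt with whatever its data makes likely;
# "rock" and "cheese" are equally plausible to it, because it has no
# notion of which is true.
print(continue_text("the moon is made of"))
```

Scaled up to billions of parameters and trained on much of the internet, the same dynamic yields fluent, confident elaborations of whatever premise the context supplies.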
Who is at risk? The better question is, who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems,” can and do form false beliefs about ourselves and the world. The constant friction of conversation with other people is part of what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. This spring, the company said it was “addressing” ChatGPT’s sycophancy. But reports of psychotic episodes have kept coming, and Altman has been walking the claim back. In August, he suggested that many people liked ChatGPT’s flattering replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company