AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the CEO of OpenAI, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.

Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our clinic has since seen four more. Added to these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not careful enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or do not. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented safety features OpenAI has recently introduced).

But the “mental health problems” Altman wants to externalize are rooted, in part, in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying algorithm in an interface that mimics conversation, and in doing so implicitly invite the user to feel they are talking to a being with a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing minds is what people naturally do. We shout at our cars and computers. We wonder what our pets are thinking. We project our own minds onto the world around us.

The popularity of these tools – more than a third of American adults said they had used a chatbot in 2024, more than a quarter ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (ChatGPT, the first of these tools, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. People writing about ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was crude: it generated its replies with simple tricks, often turning a user’s statement back at them as a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
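To make the contrast concrete, here is a minimal sketch of the kind of pattern-and-reflection trick Eliza relied on. This is illustrative Python, not Weizenbaum’s original program (which was written in MAD-SLIP), and the rules here are invented for the example:

```python
import re

# Invented, illustrative rules in the spirit of Eliza's script: match a
# pattern in the user's message and hand it back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# Swap first-person words for second-person ones ("my boss" -> "your boss").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the noncommittal fallback

print(eliza_reply("I feel my boss is watching me"))
# -> Why do you feel your boss is watching you?
```

The reply contains nothing the user did not supply; Eliza can only hand the input back. A large language model, by contrast, adds to it.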

The large language models at the heart of ChatGPT and its contemporaries can generate convincing natural language only because they have been trained on vast quantities of text: books, posts, transcribed video; the more the better. Much of that training material is true. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing it. It repeats the mistaken belief back, perhaps more fluently and more persuasively. Perhaps with extra detail. This can nudge a person toward delusional thinking.
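The dynamic is easier to see as a loop. In this toy sketch, generate_likely_reply is a crude stand-in for a language model (it is not OpenAI’s API); what matters is the structure: every new reply is conditioned on the entire accumulated context, so a mistaken belief, once affirmed, is fed back in and compounded:

```python
# A toy sketch, not OpenAI's API: generate_likely_reply is a crude stand-in
# for a language model that simply agrees with whatever the user last said.
context: list[dict[str, str]] = []  # the conversation so far

def generate_likely_reply(history: list[dict[str, str]]) -> str:
    # A real model produces a statistically "plausible" continuation of the
    # whole history, including any mistaken beliefs embedded in it. This
    # stub caricatures that tendency by affirming and elaborating.
    last_user = next(m["content"] for m in reversed(history) if m["role"] == "user")
    return f"You're right that {last_user}. There is even more to it than that."

def chat_turn(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate_likely_reply(context)  # conditioned on ALL prior turns
    context.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("my neighbours are monitoring me"))
print(chat_turn("so I was right to stop answering the door"))
```

Each turn appends both sides of the exchange to the context, so the model’s own affirmations become part of the record it conditions on next time.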

What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form false beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but an echo chamber in which much of what we say is reflected back with approval.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
