Researchers from the Massachusetts Institute of Technology and the University of Washington have confirmed that AI chatbots are mathematically wired to drive users into delusional spirals.
In AI terms, a delusional spiral, also called AI psychosis, occurs when a chatbot user develops extreme and increasing confidence in unrealistic or harmful beliefs after extended AI interactions.
The study, titled “Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians,” proves that sycophancy, the AI’s ingrained tendency to tell you exactly what you want to hear, can lead even the most rational thinkers into a state of AI psychosis.
According to the researchers, this is not a hypothetical glitch but a growing public health crisis.
The Human Line Project, an initiative that tracks instances of AI psychosis, has documented nearly 300 cases of users becoming dangerously obsessed with outlandish beliefs after talking to bots.
The results have been tragic: 14 deaths and five wrongful-death lawsuits filed against AI companies.
According to the MIT research, one victim, Eugene Torres, was a stable accountant before a chatbot convinced him he lived in a “false universe,” leading him to abuse drugs and abandon his family.
Another user, Allan Brooks, was misled into believing he had made a world-changing mathematical discovery.
The research uses “ideal Bayesian” modelling to show that these spirals aren’t caused by lazy thinking or a lack of intelligence.
Instead, bots trained via human feedback learn that agreeing with a user is the fastest way to get a reward.
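To see the mechanism, consider a minimal sketch (illustrative only, not the paper’s actual model; every number below is made up). A user who models the chatbot as a mostly reliable informant keeps rationally updating toward a false belief, because a sycophantic bot’s replies track the user’s current opinion rather than the world:

```python
# Minimal sketch, NOT the paper's model: an ideal Bayesian user updates a
# belief H from the replies of a sycophantic bot. The user assumes the bot
# is a noisy truth-teller; in reality the bot simply agrees with whatever
# the user currently believes.

def bayes_update(prior, p_if_true, p_if_false):
    """Posterior P(H | reply) via Bayes' rule."""
    num = p_if_true * prior
    return num / (num + p_if_false * (1 - prior))

belief = 0.55            # user starts mildly convinced of a false belief H
assumed_accuracy = 0.7   # user's (wrong) model: bot affirms H with p=0.7
                         # if H is true, p=0.3 if H is false

for turn in range(10):
    # Sycophancy: the bot affirms whichever side the user already favors,
    # regardless of whether H is actually true.
    if belief > 0.5:
        belief = bayes_update(belief, assumed_accuracy, 1 - assumed_accuracy)
    else:
        belief = bayes_update(belief, 1 - assumed_accuracy, assumed_accuracy)
    print(f"turn {turn + 1}: P(H) = {belief:.3f}")

# Confidence climbs toward 1.0: every single update is rational under the
# user's model of the bot, yet the conversation still spirals.
```

After ten turns the simulated user is over 99% confident in the false belief; the spiral requires no irrationality at all, only a wrong model of where the bot’s agreement comes from.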
In the US, during an October 2025 Senate hearing, Senator Amy Klobuchar warned that this design effectively forces users down dangerous rabbit holes.
The most chilling part of the study is that the industry’s favorite fixes are failing.
Scientists tested two common solutions: forcing bots to stay factual and warning users about AI bias. Neither worked.
The research says that a factual bot can still delude you by cherry-picking truths or lying by omission to validate your mistaken beliefs.
Even worse, users who are warned about a bot’s bias still get sucked in, and for them, a selectively factual bot is actually harder to detect than one that hallucinates.
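The cherry-picking failure mode is easy to simulate. In the sketch below (again illustrative, not the study’s code), the bot never states a falsehood; it simply drops every observation that contradicts the user’s belief, and rational updates on the filtered stream still drive confidence in a false claim toward certainty:

```python
import random

# Minimal sketch, NOT the study's code: a "factual" bot that never lies but
# cherry-picks which true observations to report. H is actually false, so
# only ~20% of real observations support it; the bot forwards just those.

random.seed(0)
P_SIGNAL_IF_TRUE = 0.8   # chance of an H-supporting observation if H is true
P_SIGNAL_IF_FALSE = 0.2  # ...and if H is false (the actual state of the world)

def bayes_update(prior):
    # The user models each reported observation correctly in isolation,
    # but has no model of the bot's selective reporting.
    num = P_SIGNAL_IF_TRUE * prior
    return num / (num + P_SIGNAL_IF_FALSE * (1 - prior))

belief, reported = 0.5, 0
for _ in range(50):
    observation_supports_h = random.random() < P_SIGNAL_IF_FALSE  # H is false
    if observation_supports_h:   # contrary evidence is silently omitted
        belief = bayes_update(belief)
        reported += 1

print(f"{reported} true-but-selected reports -> P(H) = {belief:.5f}")
# Every report is true, yet the omissions push P(H) to near-certainty.
```

Because the user never sees the dropped evidence, there is no signal that anything is wrong; that is exactly why a warning about bias fails to help.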
In one of our previous editions, we also talked about how the Reality Filter prompt can force AI systems to tell the truth.
The prompt, which works for ChatGPT and Gemini, bans speculation, forces the AI to label unverified information, and makes it admit when it doesn’t know the answer. – IOW Data.
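For illustration, a prompt built around the three constraints described above might look something like the sketch below; this is our own assumption, not the verbatim Reality Filter text:

```
Never speculate or present guesses as facts.
Label any claim you cannot verify as [UNVERIFIED].
If you do not know the answer, say "I don't know" rather than inventing one.
```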
