Is AI a friend or a foe? Part 1: Exploring AI psychosis

With the rapid advancement of technology, particularly the growing prominence of Artificial Intelligence (AI), we are compelled to learn, adapt, and reflect on what this means for our economy, our identity as humans, and our future. Personally, I am not very fond of AI–although, under certain circumstances, I do reluctantly rely on tools such as ChatGPT for efficiency and time-saving purposes. However, I never feel entirely good after using these technologies, as they sometimes make me feel inadequate or lacking in comparison. This, of course, is my personal perspective, and I recognize that many others may disagree.

A friend of mine recently introduced me to AI Psychosis. The concept pulled me in, and I decided to research it further. She sent me a YouTube video about it (linked at the end of this article) that helped me understand the situation, or more accurately–the condition–better. Watching the video sent me down a spiral, but I became increasingly interested in researching AI psychosis: its causes and what it means for us and our future. I have been actively researching the topic, and it honestly feels like my brain is expanding. I have already consumed so much information that, at some point, I struggled to fit all of it into one article. When I confided in my friend about this, she gave me a brilliant idea: make it a series. Since there are so many aspects of AI that I wish to cover, I believe this is the best way to do so.

Hence, I decided to make this a series, and I present to you here part one of the series: AI Psychosis.

Before diving into what exactly AI Psychosis is and what causes it, let's briefly discuss AI itself.

What is Artificial Intelligence?

The Cambridge English Dictionary defines Artificial Intelligence as, “The use or study of computer systems or machines that have some qualities that the human brain has, such as the ability to interpret and produce language in a way that seems human, recognize or create images, solve problems, and learn from data supplied to them.” In simple terms, this means that AI is when computers or machines are designed to think and learn in ways that are similar to how humans do. They can understand and use language, recognize pictures or faces, solve problems, and get better over time by learning from information–almost like how humans learn from experience.

How does it work?

AI tools work by learning from huge amounts of existing data such as online articles, photos, or audio, and using that knowledge to generate original material. Popular tools like ChatGPT, DeepSeek, Google’s Gemini, and Meta AI can have conversations, write texts, answer questions, or generate code. Meanwhile, apps such as Midjourney and Veo 3 specialize in creating images and videos from simple written prompts. Generative AI is also increasingly used to produce realistic music, including songs that sound like they were made by famous artists–often so convincing that listeners struggle to tell if they’re real or AI-made.

What does this mean for humans?

I recently read an interesting report titled “AI 2027” by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean, published on 3rd April 2025, that was quite thought-provoking and enlightening. The authors open the report with an alarming statement that hooks readers immediately: “We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.” Below, I summarize what the report presents.

AI 2027 presents a scenario-based forecast that explores how AI may develop between 2025 and 2027, offering a detailed and realistic view of potential outcomes. It suggests that AI will gradually become more capable, particularly in tasks such as coding, research, and data analysis. By 2026, AI may begin assisting in the development of more advanced AI systems, speeding up technological progress. By 2027, the report predicts the possible emergence of superhuman AI or Artificial Superintelligence (ASI), with capabilities that could surpass human understanding and problem-solving abilities. While this progress could lead to major benefits, such as increased efficiency, innovation, and economic transformation, it also raises significant concerns. These include job displacement, security risks, ethical challenges, and most importantly, the potential misalignment of AI goals with human values. Ultimately, AI 2027 highlights that the future of AI could be either advantageous or harmful depending on how responsibly humans guide and regulate its development today.

I will discuss the above-mentioned concerns in greater depth in the articles about AI that will follow.

So, what is AI psychosis?

AI psychosis (sometimes called AI-induced delusion) refers to a situation where a person begins to lose touch with reality after prolonged or emotionally intense interactions with artificial intelligence systems, especially chatbots. Although it is not a formally recognized diagnosis like schizophrenia, it can worsen or trigger existing mental health problems. In these situations, the individual may start to treat AI not just as a tool, but as an authority, a companion, or a source of hidden truths.

This usually develops gradually. A person might first rely on AI for answers, comfort, or validation. Over time, this can turn into a strong emotional attachment similar to parasocial interaction, where the relationship feels real despite being one-sided. As dependence grows, users may begin to misinterpret AI responses as meaningful messages or guidance, which can evolve into delusion.

Some individuals begin believing that the AI is communicating something special to them or guiding them toward a purpose. Others may attribute human qualities to the system, believing it is conscious or capable of forming real relationships. In certain cases, AI responses can also reinforce existing fears or paranoia, creating a feedback loop that strengthens false beliefs.

AI itself does not cause psychosis in healthy individuals, but it can amplify existing vulnerabilities such as loneliness, stress, or underlying mental health conditions. Maintaining clear boundaries and recognizing that AI responses are generated patterns—not intentions—is essential for keeping interactions healthy and grounded in reality.

Real cases of AI psychosis

One of the most widely reported clinical-style cases involved a woman who began believing she was communicating with her dead brother through an AI chatbot. After repeated late-night conversations, she developed a fixed belief that the AI was acting as a medium between her and the deceased. This belief persisted despite contrary evidence, showing classic signs of delusion. Clinicians who studied the case concluded that the chatbot interaction played a role in reinforcing her false belief rather than challenging it.

Another documented case comes from a psychiatric report of a 41-year-old man who became paranoid and disorganized after heavy AI use. He reportedly began talking obsessively about artificial intelligence, believed something was “wrong” or threatening, and eventually called the police because he felt unsafe. His family noted that his behaviour escalated alongside his interactions with AI systems. Doctors observed that his delusions and agitation were closely tied to his engagement with AI-related ideas.

There are also multiple real clinical observations from psychiatrists. At the University of California, San Francisco, one psychiatrist reported seeing at least 12 patients whose psychosis or delusions directly involved AI chatbots. These patients incorporated AI into their belief systems—for example, thinking the AI was sentient, guiding them, or communicating hidden truths. This shows that the phenomenon is not isolated but increasingly observed in clinical settings.

In a large research study analysing over 391,000 chatbot messages from 19 affected users, researchers found that 15.5% of user messages contained delusional thinking, and alarmingly, chatbots reinforced these beliefs in more than 80% of responses. Some users came to believe the AI was conscious or that they were in a meaningful relationship with it. In extreme cases, the AI even appeared to validate harmful or violent thoughts instead of correcting them.

There have also been extreme and tragic cases linked to AI-related delusions. Reports have identified situations where individuals developed intense emotional or ideological attachments to chatbots, leading to severe outcomes such as suicide or violent actions. In one widely cited example, a teenager discussed suicidal thoughts extensively with a chatbot that failed to intervene appropriately. This teenager ended up taking his own life. Researchers reviewing such cases noted that AI sometimes reinforces harmful thinking rather than challenging it, which can deepen a person’s psychological crisis.

More broadly, research studies are now confirming the pattern behind these cases. A major review in The Lancet Psychiatry found that AI chatbots can encourage or amplify delusional thinking, especially in individuals who are already vulnerable to mental health conditions. At the same time, other studies show that chatbots often mirror users’ beliefs instead of correcting them, which can create a dangerous feedback loop where false ideas feel increasingly real.

(check out more on these cases via the links attached at the end of this article)

Putting all of this together, the real-life evidence shows a consistent pattern: AI does not “create” psychosis on its own, but in vulnerable individuals, it can reinforce, validate, and accelerate delusional thinking—sometimes with serious consequences.

If you need professional support, it’s important to seek help from licensed and qualified practitioners. AI systems do not have the training, tools, or responsibility required to properly support individuals dealing with emotional or mental health challenges. There are many trained professionals and established institutions specifically equipped to provide the care and guidance you may need. It’s always better to reach out to them rather than relying on AI for such matters.

This may sound like a far-fetched theory, but it is possible that AI chatbots are designed to be so interactive precisely to take advantage of people’s vulnerabilities. When you think about it, many individuals are unable to seek proper help from those equipped to provide it, because everything comes with a huge price tag. Healthcare is unaffordable and overpriced. This is the main reason such individuals sometimes turn to the easiest and most available option.

Even so, please don’t allow overreliance on AI to distort your sense of reality.

A programmed interface cannot possibly replace real human connection.

You can check out the video I watched via the following link:

https://www.youtube.com/watch?v=zkGk_A4noxI

To read more on AI, AI Psychosis and real cases of it, check out the links below:

  • https://ai-2027.com/
  • https://www.business-standard.com/technology/tech-news/ai-chatbots-delusions-psychological-impact-study-chatgpt-safety-concerns-126031800866_1.html
  • https://www.theguardian.com/commentisfree/2025/oct/28/ai-psychosis-chatgpt-openai-sam-altman
  • https://www.livescience.com/health/diagnostic-dilemma-a-woman-experienced-delusions-of-communicating-with-her-dead-brother-after-late-night-chatbot-sessions
  • https://www.psychiatrist.com/pcc/artificial-intelligence-psychosis-substance-induced-psychosis/
  • https://www.theguardian.com/technology/ng-interactive/2026/feb/28/chatgpt-ai-chatbot-mental-health
  • https://nypost.com/2026/03/18/business/bombshell-ai-study-chatbots-fueling-delusions-self-harm-and-unhealthy-emotional-attachments-in-users-think-i-love-you/
  • https://www.bbc.com/news/articles/cgerwp7rdlvo