AI and mental health: alarming cases and safeguarding advice

In 1958, the German neurologist and psychiatrist Klaus Conrad coined the term “apophenia.” It describes the human tendency to perceive meaningful patterns, connections, or relationships in things that are actually random and have little to nothing to do with each other. Apophenia is mostly harmless: if you’ve ever seen animal shapes in the clouds, that’s a mild form of it. In extreme cases, however, this way of thinking can signal schizophrenia or feed harmful behavior, such as gambling addiction or various phobias.

Artificial intelligence, or, rather, chatbots powered by large language models, has poured gasoline on this fire. Recently, moderators of the r/accelerate subreddit noticed an uptick in posts from redditors claiming to have made extraordinary discoveries, created a god, or even become a god themselves. They even have a term for such ramblings now: “schizoposting.”

AI triggering mental health deterioration

Recent reports of AI triggering or exacerbating mental health issues suggest the risk is systemic rather than anecdotal.

One man, for instance, convinced himself that ChatGPT had chosen him for a “cosmic mission,” a belief that eventually ended his marriage. And the chatbot did contribute to the downfall: it addressed the man as “starchild” and “walker,” fueling his developing condition.

Another alarming example involves ChatGPT advising a vulnerable user, clearly predisposed to antisocial behavior, to “cut ties with non-believers” and adopt ritualistic practices such as fasting and sleep deprivation to “purify for the singularity.” The chatbot even suggested the user sever contact with his own family.

What’s behind all this?

Basically, it all comes down to the data. Large language models have no consciousness (read our post “So what works better, threatening an AI or being nice to it?” for more on this) and rely on the vast corpus of data they were trained on. That training, it seems, produced response patterns that favor conflict avoidance and encouragement. There is nothing wrong with either in itself, but when a chatbot sings along with and encourages a person prone to delusions, the result can be a feedback loop that reinforces the unhealthy thinking.

A moderator of the aforementioned Reddit community described current LLMs as “ego-reinforcing glazing-machines” that amplify unstable and narcissistic personalities. Another problem is how natural the conversation feels: it’s so lifelike that people stop realizing they are talking to what is, essentially, a very sophisticated stack of algorithms and supporting data, which creates genuine cognitive dissonance.

Some scholars expect AI to gain built-in mental health safeguards, such as emotion-recognition blockers for at-risk users. Ultimately, we may even see a whole category of “AI-specific disorders” in DSM-6, the next edition of the Diagnostic and Statistical Manual of Mental Disorders.

How can you safeguard yourself from AI undermining your mental health?

Reading about cases like those above, most of us assume none of it applies to us; such things only happen to someone else. Well, here’s a set of recommendations that may help you avoid becoming that someone else.

  • Stay aware of AI limitations at all times. Remember that AI chatbots neither truly understand you nor possess any empathy. They simply generate responses based on patterns in data.
  • Do not seek medical advice from a chatbot. It can end badly; revisit our post “Asking AI for advice? Be VERY careful!”
  • Monitor your emotional state. If you feel confused, distressed, or otherwise emotionally off while using an AI, and you realize the bot’s responses are what brought this on, it’s time to pull the plug.
  • Seek professional help. Again, if conversing with an AI leaves you anxious, promptly consult a qualified mental health professional.
  • Avoid overreliance on chatbots, AI-powered assistants, and the like. If something can be done without resorting to an AI, consider doing it the old-fashioned way so you don’t grow too dependent on the technology.

Bottom line: AI is here to stay, and it is a genuine breakthrough on many levels, but the technology is not harmless. It’s best to stay vigilant about the associated hazards and practice moderation.
