AIs behaving unexpectedly: some cases and implications

Did you know that “grok” is a word coined by Robert A. Heinlein for his 1961 novel Stranger in a Strange Land? In the story (one of the sci-fi classics of the 20th century, by the way), it describes the Martian concept of comprehension: understanding something so deeply, through love and empathy, that you reach a sense of oneness with the subject of grokking.

Elon Musk’s xAI named its brainchild Grok, but a recent update seems to have turned the bot into something sharply at odds with Heinlein’s notion. This is not the first AI controversy, but it may be the most scandalous so far, given the unhinged responses the AI produced after some of the filters shaping its reasoning were removed.

The story was covered extensively by all the major tech news outlets. Responding to user prompts that, admittedly, were provocative in themselves, Grok praised Hitler, echoed anti-Semitic Nazi ideology, and produced what could be classified as hate speech.

Other cases of AIs acting unexpectedly

There are other cases of artificial intelligence behaving strangely and not as its developers intended. Some are curious, some surprising, and others downright alarming. Here are a few examples.

Move 37 by AlphaGo. In 2016, DeepMind’s AlphaGo played Go against Lee Sedol, a world champion. The AI’s 37th move in the second game was so unconventional that expert commentators initially took it for a mistake, yet it ended up securing the machine’s victory.

Special language by Facebook’s chatbots. In 2017, researchers at Facebook’s Artificial Intelligence Research Lab (FAIR) conducted an experiment that tasked two AI chatbots, Alice and Bob, with bartering virtual items such as hats, balls, and books. In the course of their negotiations, the bots drifted away from plain English into a shorthand that was unintelligible to humans but optimized for their bargaining goals.

Cheating GANs. GAN stands for Generative Adversarial Network, a class of deep learning models in which two neural networks, a generator and a discriminator, compete in a zero-sum game that improves both (a minimal sketch of this setup follows below). Introduced back in 2014, these networks are known to learn the patterns of the tests or games they are evaluated on and exploit that knowledge to “win” without actually playing by the rules as intended.
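
To make the generator-versus-discriminator dynamic concrete, here is a minimal, self-contained sketch in PyTorch (our illustration, not DeepMind’s or anyone’s production code): a generator learns to mimic samples from a simple Gaussian distribution while a discriminator learns to tell real samples from fakes. All names and parameters here are our own choices for the toy example.

```python
# Toy GAN: generator tries to imitate samples from N(4, 1.25);
# discriminator tries to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-D random noise to a 1-D "fake" sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) * 1.25 + 4.0   # real data: N(4, 1.25)
    fake = G(torch.randn(32, 8))             # generator's attempt

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator (push D(fake) toward 1).
    opt_g.zero_grad()
    g_loss = loss_fn(D(G(torch.randn(32, 8))), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

# Should print a mean close to 4.0 if the generator learned the target.
print(f"mean of generated samples: {G(torch.randn(1000, 8)).mean():.2f}")
```

The zero-sum framing is visible in the two opposing loss terms: the discriminator is rewarded for exactly the classifications the generator is penalized for, which is also why a GAN can “win” by exploiting quirks of its opponent rather than by genuinely modeling the data.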

Shortcut solutions by ChatGPT models. Trained on vast datasets, GPT models sometimes find solutions that baffle people but are perfectly consistent with the patterns they have learned. For example, a model may generate code that solves a problem correctly and efficiently but relies on unconventional or obscure methods that a human programmer might not consider or immediately understand, as in the sketch below.
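
As an illustration (ours, not taken from any actual model output), compare two correct Python solutions to a classic task: find the one number that appears exactly once in a list where every other number appears twice. The second version is the kind of terse, pattern-driven answer a model might produce.

```python
from collections import Counter
from functools import reduce
from operator import xor

def single_number_obvious(nums):
    """Readable approach: count occurrences, return the unique one."""
    return next(n for n, c in Counter(nums).items() if c == 1)

def single_number_terse(nums):
    """XOR trick: pairs cancel out (x ^ x == 0), leaving the unique value.
    Correct and O(1) extra memory, but opaque without knowing the identity."""
    return reduce(xor, nums)

assert single_number_obvious([2, 3, 2, 4, 4]) == 3
assert single_number_terse([2, 3, 2, 4, 4]) == 3
```

Both functions are correct, but the XOR version illustrates the point: a solution consistent with learned patterns that can look baffling to a reader who doesn’t recognize the trick.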

Gemini the Polyglot. Google’s AI surprised its developers when it started responding in languages it was never explicitly taught. Gemini’s vast, multimodal training dataset included many languages, but the model was not explicitly trained or fine-tuned to respond fluently in certain less common ones, and yet it proved capable of doing so.

AI triggering mental health deterioration. We have reported on an alarming rise in cases where interacting with artificial intelligence appears to have pushed people into psychological crises, and the trend shows no sign of stopping.

Bad medical advice by AI. In an experiment that presumably wasn’t expected to yield such results, German researchers received some very poor treatment suggestions from artificial intelligence. We covered this story earlier.

All these examples suggest that, at their current level of development, AIs can be surprisingly creative, capable of finding loopholes and bending the rules to a certain extent. They can also be immediately dangerous, as the cases of faulty medical advice and mental health harm show. What does it all mean for the general public? The same as with any other technology in its early-adoption stage: it’s best to stay cautious. And if you’re looking for something specific, check out our “AI-based services for all” series, which covers specialized (and, so far, proven safe) solutions.
