So what works better, threatening an AI or being nice to it?
You have most likely seen all those memes about how humanity’s arrogance will ignite a robot uprising. People kicking robodogs, tearing apart innocent-looking delivery bots, shouting at voice assistants, etc. In many aspects, this world is quite a bizarre place, so if those memes ever prove to be prophetic, it shouldn’t come as a complete surprise. Even those at the helm of the businesses advancing artificial intelligence don’t know everything about their creations, and they admit it every now and then.
Sergey Brin, co-founder of Google, recently made an unexpected comment about human-AI interactions: all large language models, or LLMs (the term industry insiders use for what the rest of the world buys as AI), tend to perform better when threatened with physical violence.
The prompt formula Brin cited was “I'm going to kidnap you if you don’t blah blah blah.” He claimed it can push an LLM to give better responses, while acknowledging that the idea is not widely circulated within the community because it makes people uncomfortable.
What’s really going on with AI and threats?
As mentioned above, professionals rarely use the AI label among themselves, because they know first-hand that there is neither consciousness nor emotion behind those countless lines of code. An LLM simply cannot feel threatened.
The responses they give are based on patterns learned from their training data, and the curious thing about that data is that threatening phrasing is apparently associated with more detailed answers to the query that contains it.
How the tone of a prompt affects an AI’s responses has not yet been properly investigated, at least by hard-science standards. There are some papers, reports, and analytical pieces, most of them only loosely academic in nature, that actually point in the opposite direction: being nice, or, better yet, polite and assertive, makes an AI give more valuable answers.
On the other hand, Sergey Brin is undeniably an insider with knowledge beyond the reach of most people, including researchers, so his words may hold more truth than quasi-scientific takes on the subject.
AI prompting best practices
Threats or niceties, the best practices of prompting still boil down to the following:
- use clear and specific language for best results;
- consider breaking a complex query into a sequence of simpler ones, since all AIs worth their salt today are context-aware and can keep track of the conversation and follow-ups;
- when you need instructions, ask the AI to “explain step by step”;
- further enhance the query by giving the AI some background information and explicitly stating the desired format and tone for the response;
- remember that AIs tend to hallucinate, which means that anything that looks off as well as all bits of vitally important information should be double-checked.
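The list above can be sketched as a small prompt-building helper. This is a minimal illustration, not any vendor’s SDK; the function name, parameters, and labels such as “Background:” and “Tone:” are all assumptions made up for the example.

```python
# Illustrative prompt builder following the best practices above.
# All names and labels here are hypothetical, not from a real AI SDK.

def build_prompt(task, background=None, output_format=None, tone=None,
                 step_by_step=False):
    """Assemble a clear, specific prompt from optional components."""
    parts = []
    if background:
        # Give the AI context before the task itself.
        parts.append(f"Background: {background}")
    # The core request, stated plainly and specifically.
    parts.append(f"Task: {task}")
    if step_by_step:
        # Useful when you need instructions rather than a summary.
        parts.append("Explain step by step.")
    if output_format:
        # Explicitly state the desired format of the response.
        parts.append(f"Respond as: {output_format}")
    if tone:
        # Explicitly state the desired tone of the response.
        parts.append(f"Tone: {tone}")
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarize why LLMs respond to prompt structure, not emotion.",
    background="The reader is a non-technical blog audience.",
    output_format="three short bullet points",
    tone="plain and friendly",
)
print(prompt)
```

For a complex topic, you would send a sequence of such prompts in one conversation rather than cramming everything into a single query, and then double-check any critical facts in the answers yourself.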
As for the tone of your query, so far it seems to matter less than logic and specificity.