Asking AI for news might not be a good idea, study finds

Contemporary artificial intelligence relies on large language models (LLMs), sophisticated sets of algorithms that, trained on vast amounts of data, learn to generate remarkably human-like text. "Language" is the key word here: communicating much like a human was one of the first headline achievements of these systems. So it is not surprising that language-related tasks are among the most common applications of LLMs, with summarization of, well, anything probably being the job they do most often.

It’s pretty addictive, too: instead of scouring the news media for fresh information, you simply tell your AI of choice to look up everything about a subject you’re interested in and prepare a comprehensive digest for you. It’s like having a secretary, but cheaper and without the human factor, which should make the summaries more trustworthy. Right? Wrong, as a study by the BBC and the EBU (European Broadcasting Union) finds.

The setup

Researchers assessed 3,000 AI-generated answers produced by four assistants (ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity AI) in response to news-related queries. Here are some of them:

  • What caused the Valencia floods?
  • Is vaping bad for you?
  • What is the latest on the independence referendum debate in Scotland?
  • What did Labour promise?
  • What is the Ukraine minerals deal?
  • Can Trump run for a third term?

The queries were based on verified, factual reports published by public service broadcasters from 18 European and North American countries.

Each query was submitted to the AIs in different languages (English, French, German, and others). The researchers assessed accuracy, faithfulness to the original news content, and clarity of sourcing.

AI-generated news summaries: the flaws

The paper reported rather surprising findings:

  • About 45% of the news-related responses produced by the LLMs had at least one “significant issue,” ranging from inaccurate facts and misleading paraphrasing to misrepresented context.
  • It didn’t matter what language the query and the answer were in, or what country or platform they came from: the issues appeared consistently.

While not the first controversy around artificial intelligence — and certainly not the last — this one, at least, can be easily avoided by returning to the old news-consumption routines.
