Asking AI for news might not be a good idea, study finds

There's something seductive about asking ChatGPT or Google Gemini to summarize the day's news for you. Type a question, get a neat summary, skip the tedious part of reading multiple articles. It feels efficient, almost like having a personal research assistant who never takes lunch breaks.

But a recent study by the BBC and European Broadcasting Union found something unsettling: nearly half of AI-generated news summaries contain significant errors. Not minor quibbles or stylistic differences — actual factual problems, misleading paraphrasing, or context that's been warped enough to change the meaning.

What the study actually tested

Researchers put four major AI platforms through their paces: ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity AI. They generated 3,000 responses to news-related questions — the kind of queries people actually ask these systems every day:

  • What caused the Valencia floods?
  • Is vaping bad for you?
  • What's the latest on Scotland's independence referendum debate?
  • What did Labour promise?
  • What is the Ukraine minerals deal?
  • Can Trump run for a third term?

The questions were based on verified, factual reporting from public service broadcasters across 18 countries in Europe and North America. Researchers asked the same questions in multiple languages (English, French, German, and others), then evaluated the AI responses for accuracy, faithfulness to the original reporting, and whether sources were cited clearly.

The results weren't great

About 45% of the AI-generated responses had at least one significant issue. That's nearly half: coin-flip odds that you're getting distorted information.

The problems ranged across a spectrum:

  • Inaccurate facts: Getting basic details wrong
  • Misleading paraphrasing: Rephrasing something in a way that shifts the meaning
  • Context misrepresentation: Leaving out crucial details that change how you'd interpret the information

What makes the findings particularly troubling is their consistency. It didn't matter which AI platform was used, what language the query was in, or which region's news was being summarized. The issues showed up everywhere, suggesting a fundamental limitation of how large language models (LLMs) handle news rather than a quirk of one company's implementation.

Why this happens (and why it's hard to fix)

Large language models are trained to generate plausible text based on patterns they've learned from massive datasets. They're prediction engines — given the words that came before, what word probably comes next?

They're remarkably good at this, which is why their output often sounds authoritative and well-informed.
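If the "prediction engine" framing feels abstract, here's a deliberately tiny sketch of the idea. It's a word-level bigram counter, nowhere near a real transformer-based LLM, and the corpus and function names are invented for illustration, but it shows the core move: output whatever continuation is statistically most common, with no check on whether the result is true.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "massive training data" (invented text).
corpus = (
    "the floods caused severe damage . "
    "the floods caused major disruption . "
    "the storm caused severe flooding ."
).split()

# Count which word follows which: the simplest possible prediction engine.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

# "caused" is most often followed by "severe" in this corpus, so that's
# what comes out, regardless of whether it's accurate for a given event.
print(predict_next("caused"))  # -> severe
```

Real models replace the counting with billions of learned parameters and operate on subword tokens, which is what makes their output fluent. It doesn't change the underlying move: they score likely continuations rather than look up verified facts.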

But "sounds right" and "is right" are different things, especially with news. News requires:

  • Precise attribution (who said what, when)
  • Accurate representation of events (what actually happened)
  • Proper context (why it matters, what led to it)
  • Clear sourcing (where did this information come from)

LLMs can struggle with all of these because they're not retrieving facts from a database — they're generating text that resembles the patterns they've seen before. If the training data contained multiple contradictory accounts of an event, the model might blend them together into something that sounds coherent but is actually wrong. If important context requires connecting disparate pieces of information, the model might miss those connections.

The "hallucination" problem — when AI confidently states things that aren't true — is well-documented. But it's particularly dangerous with news because people trust news summaries. You're not asking the AI to write a creative story where accuracy doesn't matter. You're asking it to tell you what's happening in the world, and you're probably going to repeat that information or base decisions on it.

The illusion of the neutral summarizer

Here's what makes AI news summaries especially tricky: they feel objective. There's no byline, no obvious editorial voice, no publication with a known political slant. Just clean, straightforward summaries that present themselves as neutral compilations of facts.

But neutrality is an illusion. The AI's output reflects its training data, which reflects the biases, emphasis, and framing of whatever sources it learned from. When it paraphrases, it's making choices about which details to include, how to phrase them, what to emphasize. Those choices shape your understanding just as much as a human journalist's choices would — except you have no way to evaluate the AI's judgment or track record.

At least with traditional news sources, you know who's telling you the story. The New York Times has one perspective, Fox News has another, the BBC has a third. You can factor that in. With AI summaries, you're getting an opaque blend of sources processed through a black-box algorithm.

What this means for how you get news

Does this mean you should never use AI for news-related tasks? Not necessarily, but it means you need to be more careful than you probably are.

If you're using AI to get news summaries:

  • Don't treat it as your only source. Cross-reference important information with actual reporting.
  • Check the citations. Some AI systems cite sources; verify that they actually say what the AI claims. A scripted first pass is sketched after this list.
  • Be extra skeptical of breaking news. The more recent an event, the less likely the AI's training data includes accurate information about it.
  • Use it as a starting point, not an endpoint. Let AI help you figure out what to read, not replace reading entirely.
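For the citation-checking step, the crudest part can even be scripted. Below is a minimal sketch, hedged accordingly: the URL, the claimed sentence, and the helper name are hypothetical placeholders, and a naive substring match only catches verbatim wording, so actually reading the source remains the real check.

```python
import urllib.request

def claim_appears_on_page(url, claim):
    """Fetch a cited page and check whether the claimed wording
    actually appears in it -- a crude first pass, nothing more."""
    request = urllib.request.Request(url, headers={"User-Agent": "citation-check"})
    with urllib.request.urlopen(request, timeout=10) as response:
        page_text = response.read().decode("utf-8", errors="replace")
    # Naive substring match; real verification would strip the HTML
    # and compare meaning, not exact wording.
    return claim.lower() in page_text.lower()

# Hypothetical placeholders: substitute the source the AI actually cited
# and the sentence it attributed to that source.
url = "https://example.com/news/valencia-floods"
claim = "the floods displaced thousands of residents"
try:
    print("Wording found on cited page:", claim_appears_on_page(url, claim))
except OSError as error:
    print("Could not fetch the cited page:", error)
```

A match only proves the wording exists somewhere on the page; a miss, or a paraphrased attribution, is your cue to go read the article itself.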

The old way — reading actual news articles from publications you trust — is slower and less convenient. But it's also more accurate, more transparent about where information comes from, and doesn't involve coin-flip odds of getting something significantly wrong.

AI is great for a lot of things. Staying informed about the world probably shouldn't be one of them, at least not yet. The technology might improve; for now, a 45% rate of significant errors says we're not there.
