Think twice before you trust that AI-generated answer. A new study shows AI search results are wrong 60% of the time — and some models are nearly always wrong.
By: Dane Puterbaugh, School Improvement Representative
Since the public release of ChatGPT in late 2022, AI tools have flooded the market, promising to revolutionize productivity. While they can be helpful for drafting emails or brainstorming ideas, some users are relying on them as replacements for traditional search engines like Google. That’s where the trouble starts.
A recent study from the Columbia Journalism Review found that AI models used for search provide incorrect information 60% of the time on average. Perplexity was the most accurate model tested, but it still gave wrong answers 37% of the time. On the other end of the spectrum, Grok 3 was wrong a staggering 97% of the time. Worse still, paid AI models tend to deliver false information more confidently than their free counterparts.
Another concerning finding is that AI chatbots often fabricate answers rather than admitting they don’t know. This tendency to “hallucinate” undermines their reliability, especially for fact-checking or research.
What Can You Do?
AI can still be a valuable tool if used responsibly. Here’s how to stay safe:
- Verify Information: Cross-check any AI-generated answer with credible sources like reputable news outlets or academic databases.
- Be Skeptical: If an answer seems questionable or too confident without evidence, follow up with traditional search engines.
- Use AI for Brainstorming, Not Facts: Lean on AI for ideas or writing assistance, not for critical research.
Next time you turn to AI for answers, don’t trust everything you see.
—
Jaźwińska, K., & Chandrasekar, A. (2025). AI Search Has A Citation Problem. Columbia Journalism Review. https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php