
As featured in Techopedia: Why AI still hallucinates and how to fix it

Why AI hallucinations happen and how to reduce them, according to Decidr’s David Brudenell

In a recent Techopedia feature, David Brudenell, Executive Director at Decidr, weighed in on one of the most persistent problems in AI today: hallucinations in large language models (LLMs) like ChatGPT, Claude and Gemini.

The article explores why these models, despite being widely adopted, still generate false or misleading outputs with confidence. It also examines the structural reasons behind these errors and why they’re so hard to eliminate.

Brudenell explains that hallucinations are not a glitch, but an outcome of how AI models are trained and evaluated. “During pretraining, models learn patterns of language, not truths,” he told Techopedia. Post-training doesn’t help much either. Current benchmarks tend to reward boldness over caution, reinforcing confident errors over honest uncertainty.
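
The incentive problem is easy to see in miniature. The toy comparison below is a rough sketch, not any real benchmark's scoring code: under accuracy-only grading, a model that always guesses beats one that abstains when unsure, while a hypothetical abstention-aware score flips that ranking.

```python
# Toy illustration of benchmark incentives (not any real benchmark's code).
# Under accuracy-only scoring, always guessing beats honest abstention.

def accuracy_only(answers):
    # 1 point per correct answer; wrong answers and abstentions both score 0.
    return sum(1 for a in answers if a == "correct")

def abstention_aware(answers, wrong_penalty=1.0):
    # Hypothetical alternative: confident errors cost points, abstaining is neutral.
    score = 0.0
    for a in answers:
        if a == "correct":
            score += 1.0
        elif a == "wrong":
            score -= wrong_penalty
        # "abstain" contributes 0: honest uncertainty is not punished.
    return score

bold = ["correct"] * 60 + ["wrong"] * 40        # guesses on every question
cautious = ["correct"] * 55 + ["abstain"] * 45  # admits what it doesn't know

print(accuracy_only(bold), accuracy_only(cautious))        # 60 vs 55: boldness wins
print(abstention_aware(bold), abstention_aware(cautious))  # 20.0 vs 55.0: caution wins
```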

“We’ve built a system that can’t afford to admit what it doesn’t know,” Brudenell said. “We’ve created a rod for our own back.”

The risks go beyond factual errors. In fields like law, medicine, and finance, an authoritative-sounding mistake can lead real-world decisions astray. And in Brudenell’s view, it’s not just the technology that suffers; so does user trust.

The article also points to a path forward. Developers are testing tools like retrieval-augmented generation (RAG), better scoring methods, and transparency frameworks to allow models to express uncertainty rather than guess. But as Brudenell notes, technical fixes won’t be enough without changing the underlying incentives.
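
To make the retrieval idea concrete, here is a minimal RAG sketch. Everything in it is a stand-in: the word-overlap retriever, the example corpus, and the call_llm stub are hypothetical, not Decidr’s or any vendor’s implementation. The point is simply that the model is handed sources and explicit permission to say it doesn’t know.

```python
# Minimal retrieval-augmented generation (RAG) sketch. All names here are
# hypothetical stand-ins, not a specific product's API.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; swap in your provider's client.
    return f"[model response to a {len(prompt)}-character grounded prompt]"

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    # Toy lexical retriever: rank passages by word overlap with the query.
    q = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))[:k]

def answer_with_rag(query: str, corpus: list[str]) -> str:
    passages = retrieve(query, corpus)
    # Ground the model in retrieved text and explicitly allow uncertainty,
    # rather than leaving it to guess from its parameters alone.
    prompt = (
        "Answer using only the sources below. "
        "If they don't contain the answer, say \"I don't know.\"\n\n"
        + "\n".join(f"- {p}" for p in passages)
        + f"\n\nQuestion: {query}"
    )
    return call_llm(prompt)

corpus = [
    "RAG systems retrieve supporting passages before generating an answer.",
    "Grounding responses in cited sources reduces confident fabrication.",
]
print(answer_with_rag("How does RAG reduce hallucinations?", corpus))
```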

“The brilliance of the system is clear,” he said. “But its weakness—that it cannot admit weakness—remains unresolved.”

Read the full feature on Techopedia.
