AI & Technology · March 2025 · 7 min read

AI, Knowledge Engines, and the New Dunning-Kruger Problem

The most dangerous AI systems aren't the ones that are obviously wrong


Larry Hackney

Product Manager · Builder · I write about systems, decisions, and growth.


The Dunning-Kruger effect describes a cognitive bias where people with limited knowledge in a domain overestimate their own competence. They don't know enough to know what they don't know.

We've built the same problem into our AI systems.

And in some ways, the AI version is more dangerous, because AI systems project confidence at scale, with speed, and without the self-awareness that occasionally causes humans to pause and question themselves.

The Pattern

Here's how it typically unfolds.

An organization builds a knowledge system: a recommendation engine, a decision support tool, an AI assistant. The system is trained on available data, tested against known outcomes, and deployed with confidence.

In the early days, it performs well. The cases it encounters are similar to the cases it was trained on. The recommendations are accurate. Trust builds.

Then something changes. The market shifts. A new supplier enters the ecosystem. A customer segment behaves differently than historical patterns suggested. The system's training data no longer reflects the current reality.

But the system doesn't know this. It continues to generate recommendations with the same confidence it always has. And because the outputs look the same (formatted the same way, scored the same way, surfaced the same way), the humans using the system don't notice the drift either.

Until something goes wrong.

Why This Is a Design Problem

This isn't a failure of the underlying model. It's a failure of how confidence is communicated.

Most systems express confidence as a static property of the output: a score, a percentage, a classification. What they don't express is the basis for that confidence: how much data informed this recommendation, how similar this case is to the training distribution, how recently the relevant signals were updated.

Without that context, users can't distinguish between a recommendation that's highly confident because it's well-supported and a recommendation that's highly confident because the system doesn't know what it doesn't know.
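
As a rough sketch of the alternative (the field names and thresholds below are invented for illustration, not taken from any particular product), a recommendation can carry its evidential basis alongside the score, so a consumer can tell the two kinds of "highly confident" apart:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Recommendation:
    """A recommendation that carries the basis for its confidence, not just the score."""
    action: str
    confidence: float             # the model's score for this output, 0.0 to 1.0
    supporting_samples: int       # how many training examples resembled this case
    distribution_distance: float  # how far this case sits from the training distribution
    signals_updated_at: datetime  # when the inputs behind this recommendation were last refreshed

    def basis_is_thin(self, max_staleness_days: int = 30) -> bool:
        """High confidence means little if the evidence behind it is sparse, unusual, or stale."""
        age_days = (datetime.now(timezone.utc) - self.signals_updated_at).days
        return (
            self.supporting_samples < 50          # placeholder threshold
            or self.distribution_distance > 3.0   # placeholder threshold
            or age_days > max_staleness_days
        )
```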

The Knowledge Engine Implication

If you're building a knowledge engine (a system designed to synthesize signals across a complex domain and surface actionable intelligence), you have a responsibility to build epistemic humility into the architecture.

This means surfacing data freshness alongside recommendations. It means flagging when a case falls outside the distribution the system was trained on. It means distinguishing between confidence (how certain the system is) and conviction (how strongly it believes action should be taken).

It means building a system that knows the limits of its own knowledge, and communicates those limits clearly to the humans who depend on it.
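
A minimal sketch of what one of those checks might look like, assuming the system saves per-feature statistics at training time (the thresholds and wording are placeholders, not tuned values):

```python
import numpy as np


def humility_caveats(features, train_mean, train_std, days_since_signal_refresh,
                     z_threshold=3.0, max_staleness_days=30):
    """Return plain-language caveats to surface next to a recommendation."""
    caveats = []

    # Out-of-distribution check: flag any feature far from what the model saw in training.
    z_scores = np.abs((np.asarray(features) - train_mean) / (train_std + 1e-9))
    if np.any(z_scores > z_threshold):
        caveats.append("This case falls outside the range of the system's training data.")

    # Freshness check: flag recommendations built on stale signals.
    if days_since_signal_refresh > max_staleness_days:
        caveats.append(
            f"The signals behind this recommendation are {days_since_signal_refresh} days old."
        )

    return caveats
```

The specific checks matter less than the design choice: the caveats travel with the recommendation itself, instead of living in documentation the user never sees.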

The Bottom Line

The most dangerous AI systems aren't the ones that are obviously wrong. They're the ones that are subtly wrong in ways that take months to detect, because they project confidence they haven't earned and users have learned to trust them.

Build systems that know what they don't know. That's not a limitation. That's the feature.

What this looked like in my work

The innovation funnel I built for the U.S. Air Force ran directly into this problem. Program managers were evaluating AI-assisted proposals without the domain knowledge to assess them accurately. The stage-gate framework I designed included explicit evaluation criteria that separated what the AI could do from what the human evaluator needed to verify independently. The goal was to prevent the system from amplifying confident-but-wrong assessments.

Read the full case study: Innovation Funnel: U.S. Air Force
AI · Knowledge Systems · Trust · Risk
