There's a distinction that most AI systems never make, and it's costing them trust.
The distinction between confidence and conviction.
These two signals are not the same thing. Treating them as equivalent is one of the most common design failures in knowledge systems today.
Defining the Terms
Confidence is a measure of how certain the system is that its output is correct. It's backward-looking: grounded in the quality and completeness of the data, the strength of the pattern, the reliability of the model.
Conviction is a measure of how strongly the system believes action should be taken. It's forward-looking: grounded in the stakes of the decision, the cost of being wrong, the reversibility of the outcome.
A system can be 90% confident in a recommendation and still have low conviction, because the decision is irreversible, the data is thin in a critical area, or the timing is wrong. Conversely, a system can have moderate confidence and high conviction, because the cost of inaction is high and the signal, while imperfect, is directionally clear.
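To make the distinction concrete, here is a minimal sketch of what carrying the two signals separately might look like. The structure, field names, and numbers are illustrative assumptions, not a reference to any particular system:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A recommendation that carries both signals separately.
    All names here are illustrative, not any real framework's API."""
    action: str
    confidence: float  # backward-looking: how likely the output is correct (0-1)
    conviction: float  # forward-looking: how strongly to act on it (0-1)
    rationale: str     # why the two signals diverge, if they do

# High confidence, low conviction: the pattern is strong,
# but the decision is irreversible and one data source is thin.
switch_supplier = Recommendation(
    action="Switch to Supplier B",
    confidence=0.90,
    conviction=0.35,
    rationale="Irreversible contract; pricing data covers only one quarter.",
)

# Moderate confidence, high conviction: the signal is imperfect,
# but the cost of inaction is high and the direction is clear.
patch_now = Recommendation(
    action="Apply the security patch",
    confidence=0.65,
    conviction=0.90,
    rationale="Exploit is active in the wild; rollback is cheap.",
)
```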
Why This Matters in Practice
Think about the last time a system gave you a recommendation and you ignored it. Why did you ignore it?
Probably not because you thought the system was wrong. Probably because you didn't know how wrong it might be, or what it would mean if it was.
That's the gap. Most systems tell you what they think. Very few tell you how much you should act on it.
When you collapse confidence and conviction into a single score (a percentage, a star rating, a green/yellow/red indicator), you lose the nuance that separates good recommendations from dangerous ones.
The Design Implication
If you're building a system that makes recommendations, you need to surface both signals separately. Not as a technical detail buried in a tooltip, but as a first-class part of the interface.
The user should be able to see: "The system is highly confident this is the right supplier. But conviction is moderate: here's why." That context changes the decision. It changes how much weight the human puts on the recommendation. It changes whether they seek additional information before acting.
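Building on the sketch above, surfacing both signals might look something like this; the display thresholds and wording are assumptions for illustration:

```python
def present(rec: Recommendation) -> str:
    """Render both signals as first-class parts of the output
    instead of collapsing them into one score."""
    lines = [
        f"Recommendation: {rec.action}",
        f"Confidence: {rec.confidence:.0%} (how likely this is correct)",
        f"Conviction: {rec.conviction:.0%} (how strongly to act now)",
        f"Why: {rec.rationale}",
    ]
    # Illustrative rule: high confidence but low conviction is exactly
    # the case where the human should gather more information first.
    if rec.confidence >= 0.8 and rec.conviction < 0.5:
        lines.append("Suggestion: seek additional information before acting.")
    return "\n".join(lines)

print(present(switch_supplier))
```

The point is not the formatting. The point is that the divergence between the two numbers, and the reason for it, is visible at the moment of decision.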
This is what it means to build a system that supports human judgment rather than replacing it.
The Trust Equation
Trust in AI systems is not built through accuracy alone. It's built through calibration: the degree to which the system's expressed confidence matches its actual reliability.
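Calibration is measurable. A standard way to quantify the gap is expected calibration error (ECE); here is a rough sketch, with invented data for illustration:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: the gap between expressed
    confidence and observed accuracy, averaged over bins.

    confidences: predicted confidence per recommendation (0-1)
    correct:     1 if the recommendation turned out right, else 0
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()  # what the system claimed
        accuracy = correct[mask].mean()      # what actually happened
        ece += mask.mean() * abs(avg_conf - accuracy)
    return ece

# A system that always says 95% but is right 60% of the time
# is badly calibrated, even if 60% is a useful hit rate.
print(expected_calibration_error([0.95] * 10, [1, 1, 1, 0, 1, 0, 1, 0, 0, 1]))
```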
A system that is always confident, regardless of data quality, will eventually be ignored. A system that communicates uncertainty honestly, and distinguishes between the uncertainty of the output and the urgency of the action, will be used more, trusted more, and improved more.
Confidence tells you what the system knows. Conviction tells you what the system thinks you should do about it.
Both signals matter. Neither should be collapsed into the other.
Build systems that know the difference.
What This Looked Like in My Work
The A/B testing framework I built at USAA was designed specifically to convert confidence into conviction. Before the framework, product decisions were made on intuition and stakeholder preference. After it, every significant UX change required a validated hypothesis and a measured outcome. The most important finding from that work: delayed value reveal drove higher activation than front-loading the value proposition. That was a conviction-level insight that changed the design approach across the entire funnel.
Read the full case study: Funnel Conversion and A/B Testing: USAA