
A synthesis of 10 case studies across four organizations, from USAA's onboarding engine to iPROMOTEu's identity architecture, arguing that the architectural pattern enabling scale, reliability, and trust is a single, normalized decision layer above fragmented data sources.
Author: Larry Hackney
Case Studies: 10 across 4 organizations
Chapters: 5 + Conclusion
Read Time: ~18 minutes
"Across 10 case studies spanning four organizations, two government contexts, and a decade of product work, a single architectural pattern emerges: the decision layer."
The decision layer is not a specific technology. It is an architectural principle: that the most reliable, scalable, and trustworthy systems normalize fragmented inputs into a coherent model before making decisions. Every case study in this paper is an application of this pattern at scale.
Every platform failure is, at its root, a decision failure. The question is never whether a system makes decisions; it always does. The question is whether those decisions are made deliberately, with the right inputs, or accidentally, with fragmented ones.
TL;DR
Five independent identity systems at iPROMOTEu each maintained its own version of the user, driving 25–35% of support volume. Building a single normalized decision layer above them eliminated the inconsistency entirely.
25–35%: Support volume reduction from unified identity layer
100%: MFA adoption achieved without friction increase
The most expensive problems in enterprise software are not bugs. They are architectural decisions made by default: systems that grew organically, each maintaining its own version of the truth, each making decisions based on incomplete information.
The Identity Decision System at iPROMOTEu is the clearest instantiation of this pattern. Five independent systems (authentication, KYC verification, role management, account status, and device/risk signals) each maintained its own version of the user. When those versions disagreed, the platform made inconsistent decisions: a verified user blocked by a stale status flag, MFA triggering on a trusted device, permissions misaligned with roles. The result was 25–35% of support volume driven entirely by a decision architecture problem masquerading as a support problem.
The fix was not to patch individual systems. It was to build a layer above them: a unified decision layer that aggregated all signals into a single, real-time model that governed access behavior across the entire platform. When every system reads from the same identity model, the platform makes consistent decisions. And consistent decisions build trust.
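The shape of such a layer can be made concrete in a few lines. This is a minimal sketch, assuming hypothetical field names, signal sources, and decision outcomes rather than iPROMOTEu's actual implementation; the point is the structure: one snapshot type, one decision function, no second copy of the user.

```python
from dataclasses import dataclass

# Hypothetical snapshot aggregating the five source systems' signals.
@dataclass(frozen=True)
class IdentitySnapshot:
    authenticated: bool      # authentication system
    kyc_verified: bool       # KYC verification system
    roles: frozenset         # role management system
    account_active: bool     # account status system
    device_trusted: bool     # device/risk signal system

def decide_access(s: IdentitySnapshot, required_role: str) -> str:
    """Single decision point: every subsystem reads this, not its own copy."""
    if not s.authenticated or not s.account_active:
        return "deny"
    if required_role not in s.roles:
        return "deny"
    # MFA fires only when the device is untrusted or KYC is incomplete,
    # so a verified user on a trusted device never sees redundant friction.
    if not (s.device_trusted and s.kyc_verified):
        return "challenge_mfa"
    return "allow"
```

Because every access check routes through one function reading one snapshot, the failure modes above (a stale status flag, a redundant MFA challenge) have no second copy of the user to disagree with.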
This is the foundational argument of the white paper: that the architectural pattern enabling scale, reliability, and trust across complex platforms is the decision layer: a normalized, real-time model that sits above fragmented data sources and governs system behavior based on complete information.

Chapter II — The Activation Threshold
Activation is not a moment; it is a threshold. The question is not whether a user completed onboarding. It is whether they crossed the behavioral threshold that predicts long-term engagement. Designing for the threshold, not the checklist, is what separates retention systems from onboarding flows.
TL;DR
USAA's 90-day onboarding was redesigned around behavioral thresholds — not step completion. Surfacing the direct deposit prompt at the moment of highest receptivity drove a 12–15% enrollment increase. At Tend, removing post-onboarding KYC friction lifted activation by 27%.
+12–15%: Direct deposit enrollment in 90 days at USAA
+27%: Activation increase at Tend from state-driven onboarding
The conventional onboarding model is a checklist: complete these steps, and the user is onboarded. The problem with this model is that it conflates process completion with behavioral activation. A user who has completed every onboarding step but has not experienced the product's core value is not activated; they are processed.
The USAA onboarding and direct deposit work demonstrates the difference. The 90-day journey was not designed around step completion. It was designed around behavioral thresholds: the moment a member received their first paycheck, the moment they used their debit card for the first time, the moment they reached a balance milestone. Each threshold triggered a specific intervention, not because the system was following a script, but because it knew where the member was in their relationship with the product.
The direct deposit prompt was the critical threshold. Surfacing it at the moment of highest receptivity (after the member had experienced the account's value, with a real-time paycheck simulation showing exactly when funds would arrive) converted at a significantly higher rate than the generic setup prompt it replaced. The 12–15% enrollment increase was not the result of a better prompt. It was the result of a better decision about when to deliver the prompt.
The Tend KYC case study extends this argument to compliance. Compliance interruptions (post-onboarding KYC prompts that were degrading 30–60 day retention) were the product of a system that didn't know the user's state. A static onboarding model doesn't know that a user has already completed verification; it checks the current state and prompts if anything is missing. A state-aware model knows the user's history and surfaces only what's genuinely needed. The 27% activation increase at Tend came from removing friction, not adding features.

Chapter III — The Stickiness Stack
Retention is not a feature. It is a system. The stickiness stack (the layered set of behaviors, integrations, and value moments that make a product indispensable) must be designed deliberately, measured continuously, and optimized empirically. Products that feel sticky were engineered to feel that way.
TL;DR
USAA's Learning Center redesign connected content to behavioral data for the first time — enabling A/B-driven optimization that produced $375K in incremental revenue. At iPROMOTEu, standardizing vendor data ingestion via PromoStandards eliminated a class of product quality failures degrading affiliate stickiness.
+8%: Conversion rate increase from data-driven content system
$375K: Incremental YoY revenue from Learning Center redesign
Franklin Madison's research on stickiness in personal finance identifies four pillars of a sticky customer experience: seamless first impression, proactive engagement, omnichannel continuity, and trust-building through transparency. These pillars are not abstract principles; they are engineering requirements. Each one requires a specific set of system capabilities to deliver consistently at scale.
The USAA Learning Center and Car Buying work demonstrates the stickiness stack in practice. The challenge was not content quality: USAA's editorial team was capable. The challenge was that the content system was not connected to the behavioral data that would tell it which content was driving conversion and which was not. The Tableau dashboard unified auto loan, insurance, and TrueCar car buying data into a single view, enabling content decisions based on the full customer journey. The A/B testing framework then validated those decisions empirically.
The result was not just an 8% conversion rate increase. It was a content system that compounded over time: each A/B test generating signal that informed the next, each piece of content building on the behavioral data that preceded it. The $150K in eliminated development costs was a byproduct of removing the engineering bottleneck that had been preventing the system from compounding.
The PromoStandards Vendor Integration Framework at iPROMOTEu extends the stickiness argument to B2B platforms. Vendor data fragmentation (inconsistent formats, missing fields, incompatible schemas) was creating a class of product quality failures that were degrading the affiliate experience. The integration framework standardized data ingestion, enabling the platform to make consistent product display decisions regardless of vendor data quality. Stickiness in a B2B marketplace is a function of product data quality, and product data quality is a function of the decision layer that governs data ingestion.
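What "normalize at ingestion" means in practice can be sketched as one adapter per vendor feeding a single canonical schema. The field names and vendor formats below are hypothetical, and real PromoStandards schemas are far richer than this; the sketch shows only the pattern of rejecting inconsistency at the boundary:

```python
# Illustrative canonical schema; real product records carry many more fields.
CANONICAL_FIELDS = ("sku", "name", "price_usd")

def normalize_vendor_record(vendor: str, raw: dict) -> dict:
    """Normalize at ingestion so every downstream decision reads one schema."""
    if vendor == "vendor_a":                   # hypothetical format A
        record = {
            "sku": raw["ItemNumber"],
            "name": raw["Description"],
            "price_usd": float(raw["Price"]),
        }
    elif vendor == "vendor_b":                 # hypothetical format B
        record = {
            "sku": raw["sku"],
            "name": raw.get("title", ""),
            "price_usd": float(raw["cost_cents"]) / 100,
        }
    else:
        raise ValueError(f"no adapter registered for {vendor!r}")
    # Reject incomplete records at the boundary, not in the display layer.
    if not all(record[f] for f in CANONICAL_FIELDS):
        raise ValueError(f"incomplete record from {vendor!r}: {record}")
    return record
```

Everything downstream (product display, order routing, status communication) reads the canonical record and never sees vendor-specific quirks.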

Chapter IV — The Legacy Trap
Legacy systems are not technical problems. They are organizational ones. The decision to maintain a legacy system is made every day: in every sprint that patches rather than replaces, in every roadmap that defers migration, in every architecture review that accepts the status quo. Escaping the legacy trap requires a different kind of decision: the decision to treat migration as a product problem, not an engineering one.
TL;DR
The Xebra → iSuite migration at iPROMOTEu was reframed as a product problem, not an engineering one — measuring success by affiliate productivity rather than technical milestones. The result: 40% fewer support tickets. The same pattern applies at the data layer: fragmented data prevents intelligence.
40%: Reduction in affiliate support tickets post-migration
3×: Faster decision velocity from unified intelligence platform
The iPROMOTEu platform rationalization, migrating from Xebra to iSuite, is a case study in treating migration as a product problem. The conventional approach to platform migration is to treat it as an engineering project: define the technical requirements, build the migration tooling, execute the cutover. This approach consistently produces the same outcomes: scope creep, timeline overruns, and a user experience that degrades during the transition.
The alternative approach is to treat migration as a product problem: define the user experience requirements, build the migration path around them, and measure success by user outcomes rather than technical milestones. The Xebra to iSuite migration was executed with this framing: the migration path was designed to preserve affiliate workflows, the cutover was staged to minimize disruption, and the success metrics were defined in terms of affiliate productivity rather than technical completion.
The legacy trap is not just a platform problem. It is a data problem. The AI-powered intelligence platform at Productable demonstrates this at the data layer: when data is fragmented across systems, the intelligence layer cannot make good decisions. The platform rationalization at iPROMOTEu and the intelligence platform at Productable are the same problem at different layers of the stack: both require a decision layer that normalizes fragmented inputs into a coherent model before any meaningful intelligence can be applied.

Chapter V — The Innovation Funnel
Innovation at scale is not a creative problem. It is a systems problem. The question is not how to generate more ideas; most organizations have more ideas than they can execute. The question is how to build a system that consistently identifies the ideas worth pursuing, advances them through a structured process, and kills the ones that don't survive contact with evidence.
TL;DR
Productable and the U.S. Air Force both faced the same problem: too many ideas, no system to evaluate them. A state-driven workflow — where each idea has a defined state, defined actions, and auditable decisions — reduced idea-to-pilot cycle time by 60% and quadrupled the ideas advancing past initial evaluation.
60%: Reduction in idea-to-pilot cycle time
4×: Increase in ideas advancing past initial evaluation
The Productable innovation funnel and the U.S. Air Force innovation work are the same system at different scales. Both required a structured process for managing the full lifecycle of an idea from submission to deployment.
At Productable, the innovation funnel was a product: a platform that helped organizations manage the full lifecycle of an idea from submission to deployment. The design challenge was to make the process feel lightweight enough to encourage participation while structured enough to produce consistent outcomes. The solution was a state-driven workflow (each idea in a defined state, each state with a defined set of actions and decision criteria) that made the process transparent and the decisions auditable.
The U.S. Air Force work applied the same pattern to a government context. The challenge was not idea generation: the Air Force had no shortage of ideas. The challenge was that ideas were being generated in isolation, without a shared framework for evaluation or a structured path to execution. The innovation funnel provided that framework: consistent evaluation criteria, a structured advancement process, and a recurring cadence that kept ideas moving rather than stalling.
The connective tissue between these two cases and the rest of the white paper is the decision layer. The innovation funnel is a decision system: it takes ideas as inputs, applies evaluation criteria as decision rules, and produces advancement decisions as outputs. When the decision rules are consistent and the process is transparent, the system produces better outcomes than ad hoc evaluation: not because the ideas are better, but because the decisions about which ideas to pursue are better.
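The stage-gated funnel described above is, structurally, a small state machine with an audit trail. A sketch, with hypothetical stage names and transitions standing in for the actual Productable workflow:

```python
# Hypothetical stages and allowed transitions; the audit trail is the point.
ALLOWED = {
    "submitted": {"evaluating", "killed"},
    "evaluating": {"piloting", "killed"},
    "piloting": {"deployed", "killed"},
}

class Idea:
    def __init__(self, title: str):
        self.title = title
        self.stage = "submitted"
        self.history: list[tuple[str, str, str]] = []  # (from, to, rationale)

    def advance(self, to: str, rationale: str) -> None:
        """Every advancement decision is checked against the allowed
        transitions and recorded, so you can always ask why an idea moved."""
        if to not in ALLOWED.get(self.stage, set()):
            raise ValueError(f"{self.stage} -> {to} is not a valid transition")
        self.history.append((self.stage, to, rationale))
        self.stage = to
```

Because transitions outside the defined set raise rather than silently succeed, ad hoc advancement is impossible and every decision leaves evidence.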
TL;DR
Every case study in this paper is an application of the same pattern: a state-aware system that normalizes fragmented inputs before making decisions. The implication for product leaders: ask not 'what does this system do?' but 'what decisions does it make, and what inputs does it use?'
The decision layer is not a specific technology or a specific product. It is an architectural principle: that the most reliable, scalable, and trustworthy systems are those that normalize fragmented inputs into a coherent model before making decisions. Identity signals, behavioral triggers, compliance states, vendor data, innovation criteria: all of these are inputs to decisions. When those inputs are fragmented, the decisions are inconsistent. When they are normalized, the decisions are reliable.
The Red Napkin strategy (Jon Taffer's three-visit retention model from Bar Rescue) is the most intuitive expression of this principle. The strategy works because it is state-aware: the restaurant knows where the customer is in their relationship and delivers the right intervention at the right moment. Visit 1: red napkin (signal that the customer is new, deliver a memorable first impression). Visit 2: chicken discount (recognize the returning customer, deliver value that rewards return). Visit 3: free cheesecake (lock in the habit, create an expectation of ongoing value).
Every case study in this white paper is an application of this pattern at scale. The USAA 90-day onboarding journey is the Red Napkin strategy operationalized across 180 interactions. The Tend state-driven onboarding is the Red Napkin strategy applied to cross-border compliance. The iPROMOTEu identity decision system is the Red Napkin strategy applied to platform access control.
The implication for product leaders is direct: the question to ask of any system is not "what does this system do?" but "what decisions does this system make, and what inputs does it use to make them?" When the inputs are fragmented, the decisions are unreliable. When the inputs are normalized into a decision layer, the system becomes trustworthy: and trustworthy systems are the foundation of products that scale.
Five Principles of Decision-Layer Architecture
1. Fragmented inputs produce inconsistent decisions. Build the normalization layer first; the decision quality follows from it.
2. Static flows assume every user follows the same path. State-aware systems know where each user is in their journey and deliver the right intervention at the right moment.
3. Compliance requirements implemented as feature patches accumulate into technical debt. Implemented as system states, they become a foundation for trust.
4. A decision system that cannot be audited cannot be improved. Transparency in decision logic is not just a governance requirement; it is a product quality requirement.
5. Activation is not step completion. Retention is not time-on-platform. Design for the behavioral thresholds that predict long-term value, and measure those.
Evidence Base
Each case study is a chapter in the larger argument. Read them individually or follow the connective tissue across all 10.
This case study establishes the foundational argument of the white paper: that identity is not a data problem but a decision problem. The unified identity layer is the first instantiation of the core thesis: that building a decision layer above fragmented data sources is the architectural pattern that enables scale, reliability, and trust across complex platforms.
MFA rollout is a microcosm of the broader white paper argument: that system design determines adoption outcomes. The rollout succeeded because the decision layer (identity system) was built first, enabling the experience layer (device recognition, contextual exemption) to make the right decisions automatically. Security without friction is a systems design achievement, not a security achievement.
The payment portal demonstrates the white paper's argument that decision systems create revenue when they are positioned at high-intent moments. The upsell logic is a decision layer: it takes order context, customer history, and product affinity as inputs and produces a personalized offer as output. Centralizing that decision at the checkout moment, rather than leaving it to individual affiliates, is what unlocked the 5–8% conversion lift.
The Xebra rationalization illustrates the white paper's argument about system coherence: that decision systems require a single source of truth. When two systems maintain conflicting state, every decision that depends on that state is potentially wrong. Rationalization is not a technical exercise; it is the prerequisite for reliable decision-making at scale.
The PromoStandards framework demonstrates the white paper's argument about normalization as the foundation for scalable decision systems. A decision system can only be as consistent as the data it reads from. By normalizing vendor data at ingestion, I created a foundation where every downstream decision (product display, order routing, status communication) could be made consistently regardless of vendor data quality.
The innovation funnel case study demonstrates that the decision layer pattern applies beyond product platforms to organizational processes. The stage-gated lifecycle is a decision system: it takes ideas as inputs, applies evaluation criteria at each stage, and produces advancement decisions as outputs. Building that system with clear inputs, consistent processes, and sustained engagement is the same architectural challenge as building a product decision layer.
The Tend case study demonstrates the white paper's argument about state-driven design as the foundation for both compliance and retention. A state-aware system makes better decisions because it knows where the user is in their journey. The compliance interruptions that were degrading retention were the product of a system that didn't know the user's state; the fix was to build a system that did.
The USAA onboarding case study demonstrates the white paper's argument about behavioral thresholds as the design target for activation systems. The 90-day journey was not designed around step completion; it was designed around behavioral thresholds: the moments that predict long-term engagement. The direct deposit prompt was the critical threshold, and the paycheck simulation was the decision support tool that helped members cross it.
The Learning Center case study demonstrates the white paper's argument about data normalization as a prerequisite for content decision systems. The Tableau dashboard unified fragmented data sources into a single view, enabling content decisions based on the full customer journey. The A/B testing framework then validated those decisions empirically. The pattern (normalize data, identify decision moments, test and optimize) is the same pattern that appears across all 10 case studies.
The A/B testing framework is the white paper's argument about empirical validation made concrete. A decision system is only as good as the quality of its inputs. The testing framework was the mechanism for improving input quality: replacing assumptions with validated behavioral data. The delayed value reveal finding is a specific example of how empirical testing can overturn conventional wisdom and produce better decisions at scale.
The Identity & Compliance Framework is the white paper's argument about state-driven design applied to its highest-stakes domain: compliance. When identity state is centralized and real-time, compliance requirements become decisions rather than interruptions. The three engagements documented here — Tend KYC, iPROMOTEu MFA, iPROMOTEu address verification — are the empirical evidence for that argument.
The Operating System
ibuildsystems.io
Four frameworks. One repeatable system. Applied across banking, fintech, government, and B2B SaaS to turn broken workflows into scalable revenue engines.