When I was working on the Life Event Mobility Engine for C-4 Analytics, I kept running into a version of the same conversation with engineers.
"What should the weight be for a lease expiration trigger?"
My first instinct was to say, "I do not know. What does the data say?" That is the right instinct for a lot of product decisions. Let the data decide.
But for a scoring model, "what does the data say?" is not a complete answer. Because the data tells you what happened in the past. The scoring model is a bet on what will happen in the future. And that bet requires judgment, not just analysis.
What a Scoring Model Actually Is
The Life Event Mobility Engine is built on a three-level scoring architecture. Level one assigns base scores to individual data signals: a lease expiration, a repeated service visit, a move to a new address, a new child. Level two applies persona-weighted multipliers: the same signal matters more for some buyer types than others. Level three looks for clusters of signals within a 90-day window and routes users into specific campaigns when their cumulative score crosses a threshold.
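To make the three levels concrete, here is a minimal sketch of how they might compose. The signal names and the 90-day window come from the description above, and the persona codes M1 and M2 are explained later in this piece; every point value and multiplier is an illustrative assumption, not the production model.

```python
from datetime import datetime, timedelta

# Level one: base scores per signal. The specific point values are
# hypothetical; the article only says which signals score high vs. moderate.
BASE_SCORES = {
    "lease_expiration": 30,        # hard deadline: the customer has to act
    "repeated_service_visit": 25,  # aging vehicle, and the customer knows it
    "new_address": 15,             # correlates with need, but no urgency
    "new_child": 20,
}

# Level two: persona-weighted multipliers. Multiplier values are assumptions.
PERSONA_WEIGHTS = {
    "M1": {"new_address": 0.2, "lease_expiration": 1.0},  # Executive
    "M2": {"new_address": 2.0, "lease_expiration": 1.2},  # Established Professional
}

WINDOW_DAYS = 90

def propensity_score(signals, persona, now=None):
    """Level three: sum persona-weighted scores for signals in the 90-day window."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=WINDOW_DAYS)
    total = 0.0
    for signal_type, occurred_at in signals:
        if occurred_at < cutoff:
            continue  # outside the clustering window
        base = BASE_SCORES.get(signal_type, 0)
        weight = PERSONA_WEIGHTS.get(persona, {}).get(signal_type, 1.0)
        total += base * weight
    return total
```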
Each of those levels is a product decision.
The base scores are a statement about which signals are most predictive of purchase intent. A lease expiration gets a high score because it is a hard deadline: the customer has to do something. A move to a new address gets a moderate score because it correlates with vehicle needs but does not create urgency. A repeated service visit gets a high score because it signals that the vehicle is aging and the customer is aware of it.
These are not arbitrary numbers. They are hypotheses about human behavior. And they need to be tested, updated, and refined as the model accumulates data.
The Persona Weighting Problem
The most interesting design challenge in the scoring model was the persona weighting. The same signal means different things for different buyer types.
A "suburban move" is a massive trigger for an M2 persona: the Established Professional who needs a larger SUV for a growing family. It is largely irrelevant for an M1 persona: the Executive who buys a Maserati because of a business exit, not because they moved to the suburbs.
Getting the persona weights right required a combination of demographic data, purchase history analysis, and genuine domain expertise about what motivates buyers in each segment. It required conversations with people who had sold luxury vehicles for twenty years and could tell you, from experience, that a Range Rover buyer's decision is almost always triggered by a career milestone, not a life logistics change.
That is the kind of knowledge that does not live in a database. It lives in people. And extracting it, turning it into weights and thresholds that a machine can use, is the product work that makes the scoring model valuable.
The Campaign Activation Ladder
The third level of the model, the campaign activation ladder, is where the scoring model becomes a marketing product.
The ladder works like this: as a user's cumulative Purchase Propensity Score increases, the marketing action escalates.

- 10-25 points: awareness mode. Brand content, lifestyle imagery.
- 26-50 points: consideration mode. Retargeting, model comparisons, payment calculators.
- 51-75 points: high-intent mode. Inventory push, trade-in offers, dealer visit incentives.
- 76-90 points: purchase-ready mode. Personalized offers, BDC outreach, financing pre-approval.
- 91+ points: urgent mode. Immediate BDC call, conquest offer, service-to-sales migration.
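The ladder translates almost directly into routing logic. The tiers and point ranges below are the ones just listed; the tier identifiers are placeholders.

```python
# Floors are checked from highest to lowest, so the first match wins.
LADDER = [
    (91, "urgent"),          # immediate BDC call, conquest offer, service-to-sales
    (76, "purchase_ready"),  # personalized offers, BDC outreach, financing pre-approval
    (51, "high_intent"),     # inventory push, trade-in offers, visit incentives
    (26, "consideration"),   # retargeting, comparisons, payment calculators
    (10, "awareness"),       # brand content, lifestyle imagery
]

def route_campaign(score: float) -> str | None:
    """Return the campaign tier for a cumulative Purchase Propensity Score."""
    for floor, tier in LADDER:
        if score >= floor:
            return tier
    return None  # below 10 points: no active campaign
```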
Each threshold is a product decision. Set it too low, and you are spending conquest budget on buyers who are not ready. Set it too high, and you are missing buyers who are.
The right thresholds are different for different OEM programs, different dealer segments, and different market conditions. A luxury dealer in a coastal market has different conversion economics than a volume dealer in the Midwest. The model needs to be calibrated to those differences.
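One way to express that calibration is to treat the thresholds as per-segment configuration rather than constants. The segment names and offsets here are hypothetical; the point is that the default ladder is a starting position, not a fixed rule.

```python
# Default tier floors from the ladder above.
DEFAULT_THRESHOLDS = {"awareness": 10, "consideration": 26, "high_intent": 51,
                      "purchase_ready": 76, "urgent": 91}

# Hypothetical per-segment overrides reflecting different conversion economics.
SEGMENT_OVERRIDES = {
    # A coastal luxury store converts at higher scores, so escalate later.
    "luxury_coastal": {"purchase_ready": 82, "urgent": 95},
    # A Midwest volume store can afford to escalate earlier.
    "volume_midwest": {"purchase_ready": 70, "urgent": 86},
}

def thresholds_for(segment: str) -> dict:
    """Merge segment-specific overrides onto the default ladder."""
    merged = dict(DEFAULT_THRESHOLDS)
    merged.update(SEGMENT_OVERRIDES.get(segment, {}))
    return merged
```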
What Makes a Scoring Model Good
A good scoring model is not necessarily the most technically sophisticated one. It is the one that is calibrated to the business it serves.
That calibration requires three things: good data (the signals have to be real and current), good domain knowledge (the weights have to reflect how buyers actually behave), and good feedback loops (the model has to learn from its predictions and update accordingly).
The feedback loop is the part that most scoring models get wrong. They are built, deployed, and then left to run. The weights that were set at launch are still the weights two years later, even though the market has changed and the model has accumulated data that could improve them.
A scoring model that does not learn is not a product. It is a static rule set dressed up as intelligence. The product work is building the feedback loop: the mechanism that connects campaign outcomes back to model weights and keeps the model calibrated over time.
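As a sketch of what that mechanism could look like, here is a deliberately simple updater that nudges each signal's base score toward its observed conversion lift once campaign outcomes close. The function, its inputs, and the learning rate are all assumptions, not the engine's actual feedback loop; a production version would also account for sample sizes and confounders.

```python
LEARNING_RATE = 0.1  # assumed step size: how fast weights chase the outcome data

def update_base_scores(base_scores, outcomes):
    """Nudge base scores toward observed performance.

    base_scores: {signal_type: points}, as in the first sketch above.
    outcomes: {signal_type: (conversions, exposures)} from closed campaigns.
    """
    overall_rate = (
        sum(c for c, _ in outcomes.values()) /
        max(1, sum(e for _, e in outcomes.values()))
    )
    updated = dict(base_scores)
    for signal, (conversions, exposures) in outcomes.items():
        if exposures == 0 or signal not in updated or overall_rate == 0:
            continue
        # Lift > 1 means this signal converts better than average.
        lift = (conversions / exposures) / overall_rate
        # Move the weight one step toward what the outcome data implies;
        # lift == 1 leaves the score unchanged.
        updated[signal] = round(
            updated[signal] * (1 - LEARNING_RATE + LEARNING_RATE * lift), 1
        )
    return updated
```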
That is the difference between a scoring model that is a technical artifact and one that is a genuine product.