
Reactivation Targeting

Among dormant users, which will reactivate if we send a personalized offer?


A real-world example

Among dormant users, which will reactivate if we send a personalized offer?

Sending reactivation offers to all dormant users is wasteful — most would never return regardless, and some would return without an offer. What you need is the incremental lift: users who will reactivate because of the offer and would not have otherwise. For a platform with 3M dormant users, targeting only the persuadable segment saves $4M in offer costs while doubling reactivation rates.

Quick answer

Reactivation targeting uses counterfactual prediction to identify which dormant users will reactivate because of a personalized offer and would not have reactivated otherwise. The ASSUMING clause in PQL compares predicted behavior with and without the offer, measuring true incremental uplift rather than just response probability.

Approaches compared

4 ways to solve this problem

1. Blanket Reactivation Offers

Send the same offer to all dormant users. Zero targeting, maximum reach. The default approach for most product and marketing teams.

Best for

Small dormant user populations where the cost per offer is very low and segmentation is not worth the effort.

Watch out for

Most dormant users will never return regardless of the offer. You waste budget on 'never-returners' and give unnecessary discounts to 'always-returners' who would come back on their own.

2. Propensity-Based Targeting

Train a model to predict which dormant users will reactivate, then target the highest-scoring users. Standard binary classification on historical features.

Best for

Teams that want a step up from blanket campaigns. Simple to implement with any ML framework.

Watch out for

Propensity models predict who will return, not who will return because of the offer. High-propensity users may be 'always-returners' who would come back without any incentive. You waste budget on them while missing the persuadable middle segment.
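The three groups above can be expressed as a small decision rule over the two predicted probabilities. A minimal sketch; the thresholds (0.05 minimum uplift, 0.5 organic-return cutoff) are illustrative, not tuned values:

```python
def classify_user(p_with: float, p_without: float, uplift_threshold: float = 0.05) -> str:
    """Classify a dormant user by comparing predicted reactivation
    probabilities with and without the offer."""
    uplift = p_with - p_without
    if uplift >= uplift_threshold:
        return "persuadable"        # reactivates because of the offer
    if p_without >= 0.5:
        return "always-returner"    # likely to come back on their own
    return "never-returner"         # unlikely to return either way

# Only persuadables should receive the offer.
print(classify_user(0.68, 0.22))  # persuadable
print(classify_user(0.60, 0.58))  # always-returner: high propensity, near-zero uplift
print(classify_user(0.06, 0.05))  # never-returner
```

Note that the second user scores highest on a plain propensity model (0.60) yet should be excluded, which is exactly the failure mode described above.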

3. A/B Testing with Holdout Groups

Run controlled experiments: send offers to a treatment group, withhold from a control group, measure the difference. The gold standard for causal inference.

Best for

Validating offer effectiveness and measuring true incrementality. Required for CFO-level proof of ROI.

Watch out for

Takes 4-8 weeks per experiment cycle. Cannot personalize at the individual level, since you test segments rather than individual users. By the time results arrive, the dormant users may have drifted further out of reach.

4. KumoRFM (Counterfactual Graph Prediction)

Uses the ASSUMING clause to predict reactivation probability both with and without an offer for each user. The difference is the true uplift, the incremental effect of the offer. Learns from the relational graph of past offer responses, user behavior, and social connections.

Best for

Platforms with millions of dormant users where individual-level uplift targeting can save millions in offer costs while doubling reactivation rates.

Watch out for

Requires historical data on past offer campaigns and their outcomes. If you have never sent reactivation offers before, consider running A/B tests first to generate training data.

Key metric: Counterfactual targeting doubles reactivation rates vs blanket campaigns while cutting offer costs by 60-75%, because it concentrates budget on the persuadable segment that flat propensity models cannot identify.

Why relational data changes the answer

User U701 has a Pro plan, was active for 35-minute sessions before going dormant 110 days ago, and redeemed 4 of 6 prior offers. A flat propensity model scores them as 'likely to reactivate' based on these features. But is the reactivation because of the offer or despite it? The counterfactual question requires relational context.

The graph reveals that 3 of U701's 5 connected users recently reactivated, suggesting social pull. Their pre-dormancy engagement was deep (35 min/session), indicating they valued the product. The gap between their plan value and usage at churn was high, meaning they were paying for features they stopped using rather than features that were not useful. These compound signals let the model separate the offer's causal effect from the organic return probability. On the SAP SALT benchmark, relational models with multi-table signals achieve 91% accuracy vs 75% for single-table approaches. For counterfactual prediction specifically, the relational advantage is even larger because uplift estimation requires understanding the context around each user, not just the user's own attributes.
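The "social pull" signal (3 of U701's 5 connections reactivated) is a graph feature that a single-table model never sees. A minimal sketch of how such a feature could be computed from an edge list; the neighbor IDs here are hypothetical:

```python
def neighbor_reactivation_rate(user: str, edges: list[tuple[str, str]],
                               reactivated: set[str]) -> float:
    """Fraction of a user's connected users who recently reactivated --
    the social-pull signal a flat, single-table model cannot see."""
    neighbors = {b for a, b in edges if a == user} | {a for a, b in edges if b == user}
    if not neighbors:
        return 0.0
    return sum(n in reactivated for n in neighbors) / len(neighbors)

# U701's five connections, three of whom reactivated (as in the example above).
edges = [("U701", x) for x in ("U810", "U811", "U812", "U813", "U814")]
print(neighbor_reactivation_rate("U701", edges, {"U810", "U811", "U812"}))
```

In a relational model this kind of aggregation is learned from the graph automatically rather than hand-engineered.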

Imagine a gym offering free month passes to lapsed members. Some would have come back anyway when New Year's resolutions kick in (always-returners). Some will never return regardless (lost causes). The valuable targets are the ones in the middle who need a nudge. Counterfactual prediction is like having a crystal ball that shows two futures for each person: one where they get the free month, one where they do not. The difference between those futures is the true value of the offer.

How KumoRFM solves this

Relational intelligence for customer retention

Kumo's ASSUMING clause enables counterfactual prediction — comparing the probability of reactivation with and without a personalized offer. The difference is the true uplift. By learning from the relational graph of past offer responses, user behavior patterns, and social connections, Kumo identifies the persuadable segment that traditional A/B testing takes months to find.

From data to predictions

See the full pipeline in action

Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.

1. Your data

The relational tables Kumo learns from

USERS

| user_id | plan | last_active_date |
|---|---|---|
| U701 | Pro | 2024-11-15 |
| U702 | Basic | 2024-10-02 |
| U703 | Pro | 2024-12-08 |

SESSIONS

| session_id | user_id | duration_min | timestamp |
|---|---|---|---|
| S7001 | U701 | 35 | 2024-11-15 |
| S7002 | U702 | 8 | 2024-10-02 |
| S7003 | U703 | 52 | 2024-12-08 |

OFFERS

| offer_id | user_id | type | discount_pct | timestamp |
|---|---|---|---|---|
| OF901 | U701 | reactivation | 30% | 2025-01-10 |
| OF902 | U702 | reactivation | 20% | 2025-01-15 |
| OF903 | U703 | reactivation | 25% | 2025-02-01 |
2. Write your PQL query

Describe what to predict in 2–3 lines — Kumo handles the rest

```
PREDICT COUNT(SESSIONS.*, 0, 14, days) > 0
FOR EACH USERS.USER_ID
WHERE COUNT(SESSIONS.*, -90, 0, days) = 0
ASSUMING COUNT(OFFERS.*
    WHERE OFFERS.TYPE = 'reactivation',
    0, 1, days) > 0
```
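Reading the query top to bottom: predict whether each user has any session in the next 14 days, restricted to users with no sessions in the prior 90 days, assuming a reactivation offer is sent within 1 day. A pure-Python sketch of the two time windows on the sample tables above; the anchor date and the inclusive/exclusive boundary conventions are assumptions:

```python
from datetime import date, timedelta

anchor = date(2025, 1, 1)  # assumed prediction anchor date

sessions = {  # user_id -> session dates, mirroring the SESSIONS table
    "U701": [date(2024, 11, 15)],
    "U702": [date(2024, 10, 2)],
    "U703": [date(2024, 12, 8)],
}

def is_dormant(user: str) -> bool:
    """WHERE window: no sessions in the 90 days before the anchor."""
    lo, hi = anchor - timedelta(days=90), anchor
    return not any(lo <= d < hi for d in sessions.get(user, []))

def target_label(user: str) -> bool:
    """PREDICT window: any session in the 14 days after the anchor."""
    lo, hi = anchor, anchor + timedelta(days=14)
    return any(lo <= d < hi for d in sessions.get(user, []))

dormant = [u for u in sessions if is_dormant(u)]
print(dormant)  # only users passing the WHERE filter enter the prediction
```

With this anchor, only U702 qualifies as dormant: U701 and U703 both have sessions inside the 90-day lookback.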
3. Prediction output

Every entity gets a score, updated continuously

| USER_ID | P(reactivate with offer) | P(reactivate without offer) | UPLIFT |
|---|---|---|---|
| U701 | 0.68 | 0.22 | +0.46 |
| U702 | 0.19 | 0.15 | +0.04 |
| U703 | 0.55 | 0.41 | +0.14 |
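Turning these scores into a send list is a one-line filter. This sketch applies the +0.20 rule of thumb from the FAQ to the three rows above:

```python
# Prediction rows: (user_id, p_with_offer, p_without_offer)
predictions = [("U701", 0.68, 0.22), ("U702", 0.19, 0.15), ("U703", 0.55, 0.41)]

UPLIFT_THRESHOLD = 0.20  # minimum uplift that typically justifies an offer

targets = [uid for uid, p_with, p_without in predictions
           if p_with - p_without >= UPLIFT_THRESHOLD]
print(targets)
```

Only U701 clears the bar: U702 is a likely never-returner, and U703's uplift is positive but too small to pay for the discount.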
4. Understand why

Every prediction includes feature attributions — no black boxes

User U701 — Pro plan

Predicted: +0.46 uplift (high persuadability)

Top contributing features

| Feature | Value | Attribution |
|---|---|---|
| Prior offer response rate | 4 of 6 redeemed | 30% |
| Pre-dormancy engagement level | 35 min/session | 24% |
| Connected users who reactivated | 3 of 5 | 20% |
| Days dormant | 110 days | 15% |
| Plan value vs usage at churn | High gap | 11% |

Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability

Frequently asked questions

Common questions about reactivation targeting

What is the difference between propensity scoring and uplift modeling?

Propensity scoring predicts who will take an action (reactivate). Uplift modeling predicts who will take an action because of your intervention (reactivate because of the offer). The distinction matters enormously for ROI: targeting high-propensity users wastes budget on people who would have acted anyway. Targeting high-uplift users maximizes incremental return.
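The difference is easy to see in miniature. The sketch below is a toy two-model ("T-learner") estimator: it models treated and control reactivation rates separately per segment (here, by simple group means) and subtracts. The campaign records are fabricated for illustration; note that the high-engagement segment has the highest propensity (everyone returns) yet zero uplift, while the low-engagement segment is where the offer actually moves the needle:

```python
from collections import defaultdict

# (segment, got_offer, reactivated) -- illustrative historical campaign records
records = [
    ("high_engagement", True, True),   ("high_engagement", True, True),
    ("high_engagement", False, True),  ("high_engagement", False, True),
    ("low_engagement",  True, False),  ("low_engagement",  True, True),
    ("low_engagement",  False, False), ("low_engagement",  False, False),
]

def uplift_by_segment(records):
    """T-learner in miniature: estimate treated and control outcome
    rates separately, then subtract to get the uplift."""
    counts = defaultdict(lambda: [0, 0])  # (segment, treated) -> [reactivated, total]
    for seg, treated, reacted in records:
        counts[(seg, treated)][0] += reacted
        counts[(seg, treated)][1] += 1
    segments = {seg for seg, _, _ in records}
    return {seg: counts[(seg, True)][0] / counts[(seg, True)][1]
               - counts[(seg, False)][0] / counts[(seg, False)][1]
            for seg in segments}

print(uplift_by_segment(records))
```

A propensity model trained on the same records would rank high-engagement users first and spend the entire budget on always-returners.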

How does counterfactual prediction work without running an experiment?

Kumo's ASSUMING clause creates a counterfactual scenario in the model: it predicts reactivation probability assuming the offer was sent, then predicts again assuming it was not. The model learns this relationship from historical data in which some users received offers and some did not. Rather than reading off raw correlations, it learns treatment effects from that variation, with the relational graph supplying the context needed to separate the offer's effect from organic return probability.
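Mechanically, the same score-twice idea can be sketched outside Kumo with a single model that takes the treatment as an input feature and is evaluated with the flag flipped (an "S-learner"). The logistic weights and feature values below are made up purely for illustration; in KumoRFM the analogous toggling is expressed declaratively via ASSUMING:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Toy logistic scorer with assumed, hand-set weights.
# Features: prior offer redemption rate, engagement score; treatment: offer sent?
def p_reactivate(redemption_rate: float, engagement: float, offer: bool) -> float:
    z = -2.0 + 1.5 * redemption_rate + 1.0 * engagement + 1.8 * float(offer)
    return sigmoid(z)

# Counterfactual pair for one user: identical features, treatment flag flipped.
p_with = p_reactivate(0.67, 0.7, offer=True)
p_without = p_reactivate(0.67, 0.7, offer=False)
print(round(p_with - p_without, 2))  # the estimated individual uplift
```

The subtraction of the two scores, not either score alone, is what drives the targeting decision.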

What is a good uplift score for reactivation targeting?

An uplift of +0.20 or higher (20 percentage points) typically justifies a reactivation offer. Users with uplift below +0.05 are either always-returners or never-returners and should be excluded. The sweet spot is users with moderate baseline probability (0.15-0.40) and high uplift (+0.20 to +0.50).

How much can you save by targeting only persuadable users?

For a platform with 3M dormant users, targeting only the persuadable segment (typically 15-25% of dormant users) saves 60-75% of offer costs while maintaining or improving total reactivations. The savings come from not sending offers to always-returners and never-returners.
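A back-of-envelope version of that arithmetic, with an assumed $1.75 marginal cost per offer (discount redemption plus delivery) and the top of the 15-25% persuadable range:

```python
DORMANT_USERS = 3_000_000
COST_PER_OFFER = 1.75      # assumed marginal cost per offer sent
PERSUADABLE_SHARE = 0.25   # top of the 15-25% range cited above

blanket_cost = DORMANT_USERS * COST_PER_OFFER
targeted_cost = DORMANT_USERS * PERSUADABLE_SHARE * COST_PER_OFFER
savings = blanket_cost - targeted_cost

print(f"blanket:  ${blanket_cost:,.0f}")
print(f"targeted: ${targeted_cost:,.0f}")
print(f"savings:  ${savings:,.0f} ({savings / blanket_cost:.0%})")
```

Under these assumptions, targeting 25% of the dormant base cuts offer spend by 75% (roughly $3.9M here); smaller persuadable segments save proportionally more.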

Bottom line: A platform with 3M dormant users that targets only the persuadable segment saves $4M in offer costs while doubling reactivation rates — turning counterfactual prediction into measurable incremental revenue.

Topics covered

reactivation targeting AI · counterfactual prediction ML · dormant user reactivation · uplift modeling · causal inference reactivation · graph neural network counterfactual · KumoRFM reactivation · relational deep learning · personalized offer targeting · ASSUMING PQL · treatment effect prediction

One Platform. One Model. Infinite Predictions.

KumoRFM

Relational Foundation Model

Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.

For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.

Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.