Binary Classification · Win-Back

Win-Back Targeting

Which churned customers will return and make a purchase in the next 60 days?


A real-world example

Which churned customers will return and make a purchase in the next 60 days?

Most win-back campaigns blast every churned customer with the same offer, wasting budget on customers who will never return while under-investing in those who would. Knowing which churned customers are persuadable turns a 2% response rate into 12%. For a retailer with 1M lapsed customers, that precision saves $8M in wasted campaign spend per year.

Quick answer

Win-back targeting predicts which lapsed customers are most likely to return and make a purchase if contacted. The best models filter to truly dormant customers (zero orders in 180+ days), then score re-engagement probability using prior campaign responses, lifetime value patterns, and signals from similar customers who already returned.

Approaches compared

4 ways to solve this problem

1. Blanket Campaign (No Model)

Send the same win-back offer to every churned customer. Simple to execute, zero data science required. The standard approach for most marketing teams.

Best for

Small customer bases where the cost of sending is negligible and segmentation overhead is not justified.

Watch out for

Typical response rates sit at 1-3%. You waste budget on customers who will never return and annoy them in the process. Over-mailing drives unsubscribes.

2. RFM Segmentation

Score churned customers by recency, frequency, and monetary value from their active period. Target the highest-scoring segments with escalating offers.

Best for

Teams without ML infrastructure who want a step up from blanket campaigns. Works well when historical purchase behavior is a strong signal.

Watch out for

RFM only uses three dimensions from a single table. It misses campaign response history, seasonal patterns, and what similar customers in the same cohort are doing.
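As a concrete baseline, RFM scoring takes only a few lines of pandas. The sketch below is illustrative, not a production recipe: the sample orders, column names, as-of date, and the rank-percentile scoring scheme are all assumptions; real implementations often use fixed quintile buckets instead.

```python
# Sketch: RFM scoring over an assumed `orders` table
# (customer_id, amount, timestamp). Data is illustrative.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": ["C101", "C101", "C102", "C103", "C103", "C103"],
    "amount": [245.0, 120.0, 89.5, 312.0, 55.0, 80.0],
    "timestamp": pd.to_datetime([
        "2024-05-01", "2024-07-12", "2024-08-30",
        "2024-03-10", "2024-05-20", "2024-06-05",
    ]),
})
as_of = pd.Timestamp("2025-03-05")  # assumed scoring date

# Recency, frequency, monetary value from the active period.
rfm = orders.groupby("customer_id").agg(
    recency=("timestamp", lambda t: (as_of - t.max()).days),
    frequency=("timestamp", "count"),
    monetary=("amount", "sum"),
)

# Percentile-rank each dimension to [0, 1]; lower recency
# (more recent) is better, so it is ranked in reverse.
rfm["r_score"] = rfm["recency"].rank(ascending=False, pct=True)
rfm["f_score"] = rfm["frequency"].rank(pct=True)
rfm["m_score"] = rfm["monetary"].rank(pct=True)
rfm["rfm"] = rfm[["r_score", "f_score", "m_score"]].mean(axis=1)

# Highest-scoring churned customers get the escalating offers.
targets = rfm.sort_values("rfm", ascending=False)
print(targets[["recency", "frequency", "monetary", "rfm"]].round(2))
```

Note that everything here comes from one table; that single-table limitation is exactly the weakness described above.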

3. Traditional ML (Gradient Boosted Trees)

Train a binary classifier on hand-engineered features: days since last purchase, total LTV, number of past campaign responses, product categories purchased.

Best for

Teams with ML engineers and a clean feature pipeline. Solid baseline when the feature set is well-curated.

Watch out for

Feature engineering takes weeks. The model cannot see cross-customer signals like 'VIP customers in this cohort are returning at 68%' because each customer is scored independently on a flat row.

4. KumoRFM (Graph Neural Networks on Relational Data)

Connects customers, orders, campaigns, and product data into a relational graph. The GNN learns dormant-customer reactivation signals from the full network, including which similar customers returned and what campaigns triggered their return.

Best for

Retailers and subscription businesses with multi-table data who want to maximize win-back precision without building feature pipelines.

Watch out for

Requires historical campaign and order data with timestamps. If you have never run win-back campaigns before, the model has less signal on campaign responsiveness.

Key metric: Multi-table relational models achieve 91% accuracy on SAP's SALT benchmark vs 75% for single-table models, directly translating to higher precision in win-back targeting.

Why relational data changes the answer

A flat win-back model sees Dana Lee as a row: VIP segment, $4,230 LTV, 236 days since last order, 3 of 5 campaign emails opened. That is useful but incomplete. The relational graph reveals that 68% of VIP customers in Dana's cohort who received the same 20% offer already returned. It shows that Dana's purchase pattern matches spring seasonal buyers, and her last three orders came through the online channel, which has a 2.3x higher reactivation rate than in-store.

These cross-customer and cross-table signals are invisible to any model that treats each customer as an independent row. The graph neural network propagates information from returned customers to similar still-dormant customers, from successful campaigns to untried channels, and from seasonal purchase patterns to timing decisions. On SAP's SALT benchmark, models with access to multi-table relational signals score 91% accuracy vs 75% for single-table models and 63% for rule-based approaches. That gap matters when you are deciding which of 1M lapsed customers deserve a $20 win-back offer.

Imagine trying to predict which alumni will attend a college reunion by only looking at their graduation year and major. Now imagine you can also see which of their college friends already RSVP'd, whether they attended the last reunion, and whether their fraternity is organizing a table. The individual attributes get you started, but the social graph is what actually drives the decision. Win-back targeting works the same way.

How KumoRFM solves this

Relational intelligence for customer retention

Kumo filters to customers with zero orders in the past 180 days, then predicts which will re-purchase within 60 days. The graph captures dormant signals traditional models miss — similar customers who returned, prior campaign responses from connected accounts, and seasonal purchase patterns across the entire customer network.

From data to predictions

See the full pipeline in action

Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.

1. Your data

The relational tables Kumo learns from

CUSTOMERS

customer_id | name       | segment  | last_order_date
C101        | Dana Lee   | VIP      | 2024-07-12
C102        | Eli Brooks | Standard | 2024-08-30
C103        | Fiona Diaz | VIP      | 2024-06-05

ORDERS

order_id | customer_id | amount  | channel  | timestamp
O5001    | C101        | $245.00 | Online   | 2024-07-12
O5002    | C102        | $89.50  | In-store | 2024-08-30
O5003    | C103        | $312.00 | Online   | 2024-06-05

CAMPAIGNS

campaign_id | customer_id | offer_type    | sent_date
CMP401      | C101        | 20% off       | 2025-01-15
CMP402      | C102        | Free shipping | 2025-01-20
CMP403      | C103        | Loyalty bonus | 2025-02-01
2. Write your PQL query

Describe what to predict in 2–3 lines — Kumo handles the rest

PQL
PREDICT COUNT(ORDERS.*, 0, 60, days) > 0
FOR EACH CUSTOMERS.CUSTOMER_ID
WHERE COUNT(ORDERS.*, -180, 0, days) = 0
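Conceptually, the PQL above defines an entity filter (dormant for 180 days) and a forward-looking label (any order in the next 60 days). For intuition only, here is a rough pandas equivalent of that logic; this is not the Kumo implementation, and the table names, column names, and anchor date are assumed from the sample data above.

```python
# Sketch: the entity filter and label behind the PQL query,
# expressed in pandas on the toy ORDERS/CUSTOMERS tables.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": ["C101", "C102", "C103"],
    "timestamp": pd.to_datetime(["2024-07-12", "2024-08-30", "2024-06-05"]),
})
customers = pd.DataFrame({"customer_id": ["C101", "C102", "C103"]})

anchor = pd.Timestamp("2025-03-05")  # prediction time

# WHERE COUNT(ORDERS.*, -180, 0, days) = 0
# -> keep only customers with zero orders in the trailing 180 days.
recent = orders[(orders.timestamp > anchor - pd.Timedelta(days=180))
                & (orders.timestamp <= anchor)]
dormant = customers[~customers.customer_id.isin(recent.customer_id)]

# PREDICT COUNT(ORDERS.*, 0, 60, days) > 0
# -> at training time, the label is whether any order falls in the
#    60 days after the anchor (none do in this toy sample).
future = orders[(orders.timestamp > anchor)
                & (orders.timestamp <= anchor + pd.Timedelta(days=60))]
dormant = dormant.assign(label=dormant.customer_id.isin(future.customer_id))
```

The point of PQL is that this windowing, labeling, and leakage-avoidance logic is declared in three lines rather than hand-coded per project.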
3. Prediction output

Every entity gets a score, updated continuously

CUSTOMER_ID | TIMESTAMP  | TARGET_PRED | TRUE_PROB
C101        | 2025-03-05 | True        | 0.71
C102        | 2025-03-05 | False       | 0.15
C103        | 2025-03-05 | True        | 0.64
4. Understand why

Every prediction includes feature attributions — no black boxes

Customer C101 — Dana Lee

Predicted: True (71% win-back probability)

Top contributing features

Prior campaign response rate: 3 of 5 opened (31% attribution)
Similar VIP customers returning: 68% of cohort (24% attribution)
Lifetime order value: $4,230 (19% attribution)
Days since last order: 236 days (15% attribution)
Seasonal purchase pattern match: spring buyer (11% attribution)

Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability

Frequently asked questions

Common questions about win-back targeting

What is the difference between win-back and reactivation targeting?

Win-back targets customers who have fully churned (zero activity for an extended period). Reactivation targets dormant users who still have an account or subscription but stopped engaging. The signals and time horizons differ: win-back typically uses 60-day prediction windows on 180+ day dormant customers, while reactivation uses 14-day windows on 30-90 day dormant users.

How long should you wait before running a win-back campaign?

The optimal timing depends on your product's natural purchase cycle. For monthly subscriptions, 60-90 days of inactivity is a reasonable threshold. For seasonal retailers, 180 days accounts for normal buying patterns. The key is using a backward-window filter to define 'dormant' precisely, so you are not wasting offers on customers who are just between purchases.

What win-back response rate is considered good?

Untargeted campaigns typically see 1-3% response rates. Well-segmented campaigns reach 5-8%. AI-targeted campaigns that score individual customers by return probability can hit 10-15% on the top decile. The goal is not to improve the overall rate but to concentrate budget on the customers most likely to respond.
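That "concentrate budget on the top decile" step is mechanical once per-customer scores exist. A minimal sketch, using simulated scores as a stand-in for model output (the score distribution and customer count are assumptions):

```python
# Sketch: target only the top decile of predicted return probability.
import numpy as np

rng = np.random.default_rng(42)
# Simulated stand-in for model scores: most lapsed customers
# have a low probability of returning.
scores = rng.beta(1, 20, 100_000)

cutoff = np.quantile(scores, 0.90)  # top-decile threshold
target = scores >= cutoff           # customers who get the offer

print(f"contact {target.sum():,} of {scores.size:,} customers")
print(f"mean p(return), targeted: {scores[target].mean():.3f} "
      f"vs overall: {scores.mean():.3f}")
```

In practice the threshold is set by campaign economics (offer cost vs expected recovered revenue) rather than a fixed decile.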

Should you offer bigger discounts to higher-value churned customers?

Not necessarily. High-LTV customers often return without a large discount if the timing is right. The model should predict both who will return and what offer type works best. Giving a 30% discount to a customer who would have returned with a simple reminder destroys margin for no reason.

Bottom line: A retailer with 1M lapsed customers can lift win-back response rates from 2% to 12% by targeting only high-probability returners — saving $8M in wasted campaign spend and recovering $22M in dormant revenue.

Topics covered

win-back targeting AI · churned customer reactivation · customer win-back model · re-engagement prediction · lapsed customer ML · graph neural network retention · KumoRFM win-back · relational deep learning · customer lifecycle prediction · binary classification win-back · campaign targeting AI

One Platform. One Model. Infinite Predictions.

KumoRFM

Relational Foundation Model

Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.

For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.

Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.