Win-Back Targeting
“Which churned customers will return and make a purchase in the next 60 days?”
Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.


A real-world example
Which churned customers will return and make a purchase in the next 60 days?
Most win-back campaigns blast every churned customer with the same offer, wasting budget on customers who will never return while under-investing in those who would. Knowing which churned customers are persuadable turns a 2% response rate into 12%. For a retailer with 1M lapsed customers, that precision saves $8M in wasted campaign spend per year.
Quick answer
Win-back targeting predicts which lapsed customers are most likely to return and make a purchase if contacted. The best models filter to truly dormant customers (zero orders in 180+ days), then score re-engagement probability using prior campaign responses, lifetime value patterns, and signals from similar customers who already returned.
Approaches compared
4 ways to solve this problem
1. Blanket Campaign (No Model)
Send the same win-back offer to every churned customer. Simple to execute, zero data science required. The standard approach for most marketing teams.
Best for
Small customer bases where the cost of sending is negligible and segmentation overhead is not justified.
Watch out for
Typical response rates sit at 1-3%. You waste budget on customers who will never return and annoy them in the process. Over-mailing drives unsubscribes.
2. RFM Segmentation
Score churned customers by recency, frequency, and monetary value from their active period. Target the highest-scoring segments with escalating offers.
Best for
Teams without ML infrastructure who want a step up from blanket campaigns. Works well when historical purchase behavior is a strong signal.
Watch out for
RFM only uses three dimensions from a single table. It misses campaign response history, seasonal patterns, and what similar customers in the same cohort are doing.
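For teams taking this route, RFM scoring fits in a short pandas script. The sketch below is illustrative, assuming a hypothetical `orders` table with one row per purchase; the tercile scoring (1–3 per dimension) is one common convention, not a standard.

```python
import pandas as pd

# Hypothetical orders table: one row per purchase during the active period.
orders = pd.DataFrame({
    "customer_id": ["C101", "C101", "C102", "C103", "C103", "C103"],
    "amount": [245.0, 120.0, 89.5, 312.0, 150.0, 98.0],
    "timestamp": pd.to_datetime([
        "2024-05-01", "2024-07-12", "2024-08-30",
        "2024-03-10", "2024-05-20", "2024-06-05",
    ]),
})
as_of = pd.Timestamp("2025-03-05")

# Recency (days since last order), Frequency (order count), Monetary (total spend).
rfm = orders.groupby("customer_id").agg(
    recency=("timestamp", lambda ts: (as_of - ts.max()).days),
    frequency=("timestamp", "size"),
    monetary=("amount", "sum"),
)

# Score each dimension 1-3 by tercile; lower recency is better, so negate it.
rfm["r_score"] = pd.qcut(-rfm["recency"], 3, labels=[1, 2, 3]).astype(int)
rfm["f_score"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["m_score"] = pd.qcut(rfm["monetary"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["rfm_total"] = rfm[["r_score", "f_score", "m_score"]].sum(axis=1)

# Target the top-scoring segment first with the strongest offer.
print(rfm.sort_values("rfm_total", ascending=False))
```

Note that every input comes from a single table, which is exactly the limitation described above: campaign responses and cohort behavior never enter the score.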
3. Traditional ML (Gradient Boosted Trees)
Train a binary classifier on hand-engineered features: days since last purchase, total LTV, number of past campaign responses, product categories purchased.
Best for
Teams with ML engineers and a clean feature pipeline. Solid baseline when the feature set is well-curated.
Watch out for
Feature engineering takes weeks. The model cannot see cross-customer signals like 'VIP customers in this cohort are returning at 68%' because each customer is scored independently on a flat row.
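The flat-row approach looks roughly like the sketch below: a scikit-learn gradient boosted classifier trained on synthetic stand-ins for the hand-engineered features named above. All data here is fabricated for illustration; the point is that each customer is scored from its own row alone.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hand-engineered, per-customer features (synthetic stand-ins):
# days since last purchase, lifetime value, past campaign responses, categories bought.
X = np.column_stack([
    rng.integers(180, 720, n),        # days_since_last_purchase
    rng.gamma(2.0, 800.0, n),         # lifetime_value
    rng.integers(0, 6, n),            # campaign_responses
    rng.integers(1, 10, n),           # n_categories
])
# Synthetic label: responsive, high-value, recently lapsed customers return more often.
logit = -2.0 + 0.5 * X[:, 2] + 0.0005 * X[:, 1] - 0.003 * X[:, 0]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Each customer is scored independently from its own flat row:
# no cross-customer or cross-table signal can enter the model.
scores = clf.predict_proba(X_te)[:, 1]
print(f"AUC: {roc_auc_score(y_te, scores):.2f}")
```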
4. KumoRFM (Graph Neural Networks on Relational Data)
Connects customers, orders, campaigns, and product data into a relational graph. The GNN learns dormant-customer reactivation signals from the full network, including which similar customers returned and what campaigns triggered their return.
Best for
Retailers and subscription businesses with multi-table data who want to maximize win-back precision without building feature pipelines.
Watch out for
Requires historical campaign and order data with timestamps. If you have never run win-back campaigns before, the model has less signal on campaign responsiveness.
Key metric: Multi-table relational models achieve 91% accuracy on SAP's SALT benchmark vs 75% for single-table models, directly translating to higher precision in win-back targeting.
Why relational data changes the answer
A flat win-back model sees Dana Lee as a row: VIP segment, $4,230 LTV, 236 days since last order, 3 of 5 campaign emails opened. That is useful but incomplete. The relational graph reveals that 68% of VIP customers in Dana's cohort who received the same 20% offer already returned. It shows that Dana's purchase pattern matches spring seasonal buyers, and her last three orders came through the online channel, which has a 2.3x higher reactivation rate than in-store.
These cross-customer and cross-table signals are invisible to any model that treats each customer as an independent row. The graph neural network propagates information from returned customers to similar still-dormant customers, from successful campaigns to untried channels, and from seasonal purchase patterns to timing decisions. On SAP's SALT benchmark, models with access to multi-table relational signals score 91% accuracy vs 75% for single-table models and 63% for rule-based approaches. That gap matters when you are deciding which of 1M lapsed customers deserve a $20 win-back offer.
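To make "relational graph" concrete: each foreign-key relationship between tables becomes a typed edge between nodes. The toy sketch below builds that adjacency structure from the example tables with plain Python; it illustrates the graph construction idea only, not Kumo's actual internals.

```python
from collections import defaultdict

# Rows mirror the example tables; customer_id is the foreign key linking them.
orders = [("O5001", "C101"), ("O5002", "C102"), ("O5003", "C103")]
campaigns = [("CMP401", "C101"), ("CMP402", "C102"), ("CMP403", "C103")]

# Heterogeneous graph: each foreign-key pair becomes a typed edge.
edges = defaultdict(list)
for order_id, cust_id in orders:
    edges[("customer", cust_id)].append(("order", order_id))
for camp_id, cust_id in campaigns:
    edges[("customer", cust_id)].append(("campaign", camp_id))

# A message-passing layer would aggregate neighbor features along these edges,
# so a customer's score can absorb signals from her orders, her campaigns, and
# (via shared campaign or segment nodes) from similar customers who returned.
print(edges[("customer", "C101")])
# [('order', 'O5001'), ('campaign', 'CMP401')]
```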
Imagine trying to predict which alumni will attend a college reunion by only looking at their graduation year and major. Now imagine you can also see which of their college friends already RSVP'd, whether they attended the last reunion, and whether their fraternity is organizing a table. The individual attributes get you started, but the social graph is what actually drives the decision. Win-back targeting works the same way.
How KumoRFM solves this
Relational intelligence for customer retention
Kumo filters to customers with zero orders in the past 180 days, then predicts which will re-purchase within 60 days. The graph captures dormant signals traditional models miss — similar customers who returned, prior campaign responses from connected accounts, and seasonal purchase patterns across the entire customer network.
From data to predictions
See the full pipeline in action
Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.
Your data
The relational tables Kumo learns from
CUSTOMERS
| customer_id | name | segment | last_order_date |
|---|---|---|---|
| C101 | Dana Lee | VIP | 2024-07-12 |
| C102 | Eli Brooks | Standard | 2024-08-30 |
| C103 | Fiona Diaz | VIP | 2024-06-05 |
ORDERS
| order_id | customer_id | amount | channel | timestamp |
|---|---|---|---|---|
| O5001 | C101 | $245.00 | Online | 2024-07-12 |
| O5002 | C102 | $89.50 | In-store | 2024-08-30 |
| O5003 | C103 | $312.00 | Online | 2024-06-05 |
CAMPAIGNS
| campaign_id | customer_id | offer_type | sent_date |
|---|---|---|---|
| CMP401 | C101 | 20% off | 2025-01-15 |
| CMP402 | C102 | Free shipping | 2025-01-20 |
| CMP403 | C103 | Loyalty bonus | 2025-02-01 |
Write your PQL query
Describe what to predict in 2–3 lines — Kumo handles the rest
PREDICT COUNT(ORDERS.*, 0, 60, days) > 0 FOR EACH CUSTOMERS.CUSTOMER_ID WHERE COUNT(ORDERS.*, -180, 0, days) = 0
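In relational terms, the query filters on a backward window (zero orders in the past 180 days) and labels a forward window (any order in the next 60 days). The pandas sketch below expresses that same semantics on the example data, purely to clarify what the query asks for, not how Kumo executes it.

```python
import pandas as pd

orders = pd.DataFrame({
    "customer_id": ["C101", "C102", "C103", "C102"],
    "timestamp": pd.to_datetime(
        ["2024-07-12", "2024-08-30", "2024-06-05", "2025-03-20"]
    ),
})
customers = ["C101", "C102", "C103"]
anchor = pd.Timestamp("2025-03-05")  # prediction time

# WHERE COUNT(ORDERS.*, -180, 0, days) = 0 — backward window (dormancy filter).
back = orders[(orders.timestamp > anchor - pd.Timedelta(days=180))
              & (orders.timestamp <= anchor)]
dormant = [c for c in customers if c not in set(back.customer_id)]

# PREDICT COUNT(ORDERS.*, 0, 60, days) > 0 — forward window (the label).
fwd = orders[(orders.timestamp > anchor)
             & (orders.timestamp <= anchor + pd.Timedelta(days=60))]
labels = {c: c in set(fwd.customer_id) for c in dormant}
print(labels)
# {'C101': False, 'C102': True, 'C103': False}
```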
Prediction output
Every entity gets a score, updated continuously
| CUSTOMER_ID | TIMESTAMP | TARGET_PRED | TRUE_PROB |
|---|---|---|---|
| C101 | 2025-03-05 | True | 0.71 |
| C102 | 2025-03-05 | False | 0.15 |
| C103 | 2025-03-05 | True | 0.64 |
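Turning these scores into a campaign send list is a one-liner downstream. In the sketch below, the 0.5 cutoff is an illustrative threshold, not a recommendation; in practice you would tune it against offer cost and expected margin.

```python
# Prediction scores from the output table above.
preds = {"C101": 0.71, "C102": 0.15, "C103": 0.64}

# Concentrate budget: contact only customers above the chosen probability
# cutoff (0.5 here is illustrative), highest-probability first.
send_list = sorted(
    (c for c, p in preds.items() if p >= 0.5),
    key=lambda c: preds[c],
    reverse=True,
)
print(send_list)  # ['C101', 'C103']
```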
Understand why
Every prediction includes feature attributions — no black boxes
Customer C101 — Dana Lee
Predicted: True (71% win-back probability)
Top contributing features
Prior campaign response rate
3 of 5 opened
31% attribution
Similar VIP customers returning
68% of cohort
24% attribution
Lifetime order value
$4,230
19% attribution
Days since last order
236 days
15% attribution
Seasonal purchase pattern match
Spring buyer
11% attribution
Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability
PQL Documentation
Learn the Predictive Query Language — SQL-like syntax for defining any prediction task in 2–3 lines.
Python SDK
Integrate Kumo predictions into your pipelines. Train, evaluate, and deploy models programmatically.
Explainability Docs
Understand feature attributions, model evaluation metrics, and how to build trust with stakeholders.
Frequently asked questions
Common questions about win-back targeting
What is the difference between win-back and reactivation targeting?
Win-back targets customers who have fully churned (zero activity for an extended period). Reactivation targets dormant users who still have an account or subscription but stopped engaging. The signals and time horizons differ: win-back typically uses 60-day prediction windows on 180+ day dormant customers, while reactivation uses 14-day windows on 30-90 day dormant users.
How long should you wait before running a win-back campaign?
The optimal timing depends on your product's natural purchase cycle. For monthly subscriptions, 60-90 days of inactivity is a reasonable threshold. For seasonal retailers, 180 days accounts for normal buying patterns. The key is using a backward-window filter to define 'dormant' precisely, so you are not wasting offers on customers who are just between purchases.
What win-back response rate is considered good?
Untargeted campaigns typically see 1-3% response rates. Well-segmented campaigns reach 5-8%. AI-targeted campaigns that score individual customers by return probability can hit 10-15% on the top decile. The goal is not to improve the overall rate but to concentrate budget on the customers most likely to respond.
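The concentration effect is easy to see with a toy simulation: if return probability is heavily skewed toward zero across the lapsed base (as it usually is), the top decile by score responds at a multiple of the overall rate. The distribution and numbers below are fabricated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Synthetic return probabilities: most lapsed customers are near zero,
# a small tail is genuinely persuadable (illustrative shape only).
p_return = rng.beta(0.5, 15.0, n)
outcomes = rng.random(n) < p_return

# Overall response rate vs response rate in the top decile by score.
order = np.argsort(p_return)[::-1]
top_decile = outcomes[order[: n // 10]]
print(f"overall: {outcomes.mean():.1%}, top decile: {top_decile.mean():.1%}")
```

Same total responders, but a tenth of the send volume captures a disproportionate share of them, which is the entire economic argument for scoring individuals rather than blasting the base.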
Should you offer bigger discounts to higher-value churned customers?
Not necessarily. High-LTV customers often return without a large discount if the timing is right. The model should predict both who will return and what offer type works best. Giving a 30% discount to a customer who would have returned with a simple reminder destroys margin for no reason.
Bottom line: A retailer with 1M lapsed customers can lift win-back response rates from 2% to 12% by targeting only high-probability returners — saving $8M in wasted campaign spend and recovering $22M in dormant revenue.
Related use cases
Explore more retention use cases
Topics covered
One Platform. One Model. Infinite Predictions.
KumoRFM
Relational Foundation Model
Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.
For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.




