
Engagement Scoring

For each user, what will their total engagement hours be over the next 30 days?

Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.


A real-world example

For each user, what will their total engagement hours be over the next 30 days?

Binary churn models tell you who might leave but not how engaged they are. A user logging in once a month looks "active" but is already disengaging. Continuous engagement scores let product and CS teams intervene on a gradient — before the binary signal fires. For a SaaS platform with 200K users, a 10% lift in engagement correlates to $18M in upsell revenue.

Quick answer

Engagement scoring predicts a continuous measure of how much each user will interact with your product over a future time window. Unlike binary churn models that only flag 'will leave' or 'will stay,' engagement scores reveal the full spectrum from power user to disengaging, letting teams intervene on a gradient before the binary signal fires.

Approaches compared

4 ways to solve this problem

1. Rule-Based Health Scores

Assign weighted points for logins, feature usage, and support tickets. Sum the points into a 0-100 health score. The standard approach in most customer success platforms.

Best for

Teams that need a quick, interpretable score without ML infrastructure. Good for initial segmentation.

Watch out for

Weights are guessed, not learned. A login counts the same whether the user spent 2 seconds or 2 hours. The score cannot capture interaction effects between usage decline and support escalation.
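A rule-based score like this fits in a few lines. A minimal sketch, where the weights and caps are illustrative guesses rather than values from any real customer success platform (exactly the weakness noted above):

```python
# Rule-based health score: hand-picked weights summed into a 0-100 score.
# All weights and caps below are illustrative, not learned from data.
def health_score(logins_30d: int, features_used: int, open_tickets: int) -> int:
    score = 0
    score += min(logins_30d, 20) * 3         # up to 60 points for login frequency
    score += min(features_used, 5) * 6       # up to 30 points for feature breadth
    score += 10 if open_tickets == 0 else 0  # 10 points for a clean ticket queue
    return min(score, 100)

# A user like U202: few logins, one feature, one open ticket.
print(health_score(logins_30d=4, features_used=1, open_tickets=1))  # 18
```

Note that a 2-second login and a 2-hour session earn identical points here, which is the core limitation of this approach.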

2. Traditional ML Regression

Train a regression model (XGBoost, linear regression) on features like session count, feature adoption rate, and ticket volume to predict future engagement hours.

Best for

Teams with ML pipelines and clean feature stores. Solid performance when the right features are pre-computed.

Watch out for

Feature engineering is the bottleneck. You need to manually define and maintain dozens of aggregated features. The model treats each user independently, missing company-wide engagement trends.
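As a minimal illustration of this approach, here is an ordinary-least-squares fit on hand-engineered features. The data is synthetic and generated from a known linear rule (1.5 hrs per session, +10 per unit adoption, -2 per ticket), so the fit recovers it exactly; a real pipeline would use XGBoost or similar on far more features:

```python
import numpy as np

# Hand-engineered features: [session_count_30d, feature_adoption_rate, ticket_volume]
X = np.array([
    [25, 0.8, 0],
    [4,  0.2, 2],
    [30, 0.9, 1],
    [10, 0.4, 1],
    [20, 0.6, 0],
], dtype=float)
# Future engagement hours, generated from a planted linear rule for illustration.
y = np.array([45.5, 4.0, 52.0, 17.0, 36.0])

# Fit weights plus a bias term via ordinary least squares.
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Predict for a new user: 12 sessions, 0.5 adoption, 1 ticket.
pred = float(np.array([12, 0.5, 1, 1.0]) @ w)
print(round(pred, 1))  # ~21.0, recovering the planted rule
```

Every column of `X` had to be defined and maintained by hand, which is the bottleneck this section describes.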

3. Time-Series Models (Prophet, ARIMA)

Model each user's engagement as a time series with trend, seasonality, and holiday components. Forecast future engagement based on historical patterns.

Best for

Users with long, consistent engagement histories where temporal patterns dominate.

Watch out for

Fails for users with short histories. Cannot incorporate cross-entity signals like 'the user's entire company is disengaging' or 'an open support ticket is suppressing usage.'
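A simplified stand-in for this family of models (real Prophet or ARIMA usage involves more machinery) is Holt's linear-trend smoothing on a single user's weekly engagement hours; the series below is synthetic:

```python
# Holt's linear-trend (double exponential smoothing) forecast for one user.
# Smoothing constants and data are illustrative, not tuned values.
def holt_forecast(series, alpha=0.5, beta=0.3, steps=4):
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    # Extrapolate the trend, clipping at zero hours.
    return [max(level + (i + 1) * trend, 0.0) for i in range(steps)]

weekly_hours = [10.0, 9.5, 8.8, 8.0, 7.1]  # a user in steady decline
forecast = holt_forecast(weekly_hours)
print([round(h, 1) for h in forecast])  # [6.7, 6.0, 5.4, 4.7]
```

The model sees only this one user's history: it cannot know that the decline coincides with a company-wide drop or an open support ticket, which is the limitation noted above.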

4. KumoRFM (Graph Neural Networks on Relational Data)

Connects users, sessions, support tickets, and product features into a relational graph. Predicts continuous engagement hours by learning from session depth, feature adoption sequences, and how engagement trends propagate through organizational graphs.

Best for

SaaS and product teams with multi-table data who want to capture compound engagement dynamics without manual feature engineering.

Watch out for

Best suited for products with meaningful relational structure (teams, organizations, shared workspaces). A purely single-player product with no cross-user signals benefits less from the graph.

Key metric: Graph-based engagement prediction scores 76.71 vs 62.44 for flat-table models on RelBench benchmarks, with the gap widening for users in multi-seat accounts where organizational signals matter most.

Why relational data changes the answer

User U202 at Bolt Inc logs in once a week and views the dashboard. A rule-based health score might mark them as 'moderate.' But the relational picture tells a different story: U202 has an open high-priority support ticket that has been unresolved for 8 days. Their company-wide engagement is declining. They use only 1.2 features per session compared to 4.5 for healthy users. And they have not made an API call in 18 days, even though they were a heavy API user three months ago.

No single metric captures this compound decline. The session count alone looks acceptable. The support ticket alone could be a one-off. But the graph neural network sees all of these signals simultaneously, learns how they interact, and recognizes this exact pattern from hundreds of other users who eventually churned. On the RelBench benchmark, graph-based engagement prediction models score 76.71 compared to 62.44 for flat-table baselines. That 23% relative improvement translates directly into more precise intervention: instead of treating all 'medium engagement' users the same, you can distinguish between users on a stable plateau and users in an accelerating decline.

Think of engagement scoring like weather forecasting. Looking at temperature alone (logins) gives you a rough idea. But a real forecast requires barometric pressure (support tickets), humidity (feature adoption depth), wind patterns (team-wide trends), and satellite imagery (cross-company signals). Each data source alone is noisy; the relational model combines them into a forecast that is far more accurate than any single indicator.

How KumoRFM solves this

Relational intelligence for customer retention

Kumo predicts a continuous engagement score — total session hours over the next 30 days — by learning from session depth, feature adoption sequences, support ticket patterns, and how engagement spreads through organizational graphs. Unlike rule-based health scores, Kumo captures the compound dynamics that precede engagement shifts.

From data to predictions

See the full pipeline in action

Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.

1

Your data

The relational tables Kumo learns from

USERS

user_id | plan       | signup_date | company
U201    | Enterprise | 2024-02-10  | Acme Corp
U202    | Pro        | 2024-06-15  | Bolt Inc
U203    | Enterprise | 2023-11-01  | Crest Labs

SESSIONS

session_id | user_id | duration_min | features_used       | timestamp
S3001      | U201    | 45           | dashboard,reports   | 2025-02-28
S3002      | U202    | 12           | dashboard           | 2025-03-01
S3003      | U203    | 68           | reports,api,exports | 2025-03-02

SUPPORT_TICKETS

ticket_id | user_id | priority | status   | timestamp
T701      | U201    | Low      | Resolved | 2025-02-15
T702      | U202    | High     | Open     | 2025-02-28
T703      | U203    | Medium   | Resolved | 2025-01-20
2

Write your PQL query

Describe what to predict in 2–3 lines — Kumo handles the rest

PQL
PREDICT SUM(SESSIONS.DURATION_MIN, 0, 30, days)
FOR EACH USERS.USER_ID
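For intuition, the label this PQL target defines can be sketched in plain Python: for each user, sum session minutes in the 30-day window after an anchor date. The sessions and anchor date here are illustrative; the division by 60 reflects that `DURATION_MIN` is in minutes while the demo output reports hours.

```python
from datetime import date, timedelta

# Illustrative session rows; a real table would hold millions.
sessions = [
    {"user_id": "U201", "duration_min": 45, "ts": date(2025, 3, 10)},
    {"user_id": "U202", "duration_min": 12, "ts": date(2025, 3, 12)},
    {"user_id": "U201", "duration_min": 60, "ts": date(2025, 4, 20)},  # outside window
]
anchor = date(2025, 3, 5)

def target_hours(user_id):
    """SUM(SESSIONS.DURATION_MIN, 0, 30, days) for one user, converted to hours."""
    window_end = anchor + timedelta(days=30)
    mins = sum(s["duration_min"] for s in sessions
               if s["user_id"] == user_id and anchor < s["ts"] <= window_end)
    return mins / 60

print(target_hours("U201"))  # 0.75 — only the in-window session counts
```

Historically, this sum is what the model learns to predict; at inference time Kumo produces it for a future window where no sessions exist yet.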
3

Prediction output

Every entity gets a score, updated continuously

USER_ID | TIMESTAMP  | TARGET_PRED
U201    | 2025-03-05 | 22.4 hrs
U202    | 2025-03-05 | 3.1 hrs
U203    | 2025-03-05 | 38.7 hrs
4

Understand why

Every prediction includes feature attributions — no black boxes

User U202 — Bolt Inc

Predicted: 3.1 hours (low engagement)

Top contributing features

Session duration trend (30d): -54% (32% attribution)
Open high-priority tickets: 1 unresolved (26% attribution)
Features used per session: 1.2 avg (20% attribution)
Company-wide engagement trend: declining (13% attribution)
Days since last API call: 18 days (9% attribution)

Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability

Frequently asked questions

Common questions about engagement scoring

What is the difference between engagement scoring and churn prediction?

Churn prediction is binary: will they leave or stay? Engagement scoring is continuous: how much will they engage? Engagement scores are more useful for day-to-day decisions because they reveal the gradient between power users and disengaging users. A user predicted to drop from 40 hours to 5 hours per month is a different intervention than one dropping from 5 to 0.

How do you measure engagement for a SaaS product?

The best metric is total session hours over a future window (14-30 days), because it captures both frequency and depth. Counting logins alone is misleading since a user who logs in for 5 seconds is not truly engaged. Session duration weighted by feature breadth gives a more honest picture.
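One hedged way to operationalize "session duration weighted by feature breadth": scale minutes by the share of core features touched in the session. The core-feature set and the linear weighting are assumptions for illustration, not a Kumo-defined metric.

```python
# Weight session minutes by the fraction of core product features touched.
# CORE_FEATURES and the linear weighting are illustrative assumptions.
CORE_FEATURES = {"dashboard", "reports", "api", "exports"}

def weighted_engagement(duration_min, features_used):
    breadth = len(set(features_used) & CORE_FEATURES) / len(CORE_FEATURES)
    return duration_min * breadth

# A deep session (3 of 4 core features) keeps most of its minutes...
print(weighted_engagement(68, ["reports", "api", "exports"]))  # 51.0
# ...while a long but shallow session is discounted heavily.
print(weighted_engagement(30, ["dashboard"]))  # 7.5
```

Under this weighting, a 5-second login with no feature use scores near zero, addressing the "logins alone are misleading" problem.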

Can engagement scores predict upsell opportunities?

Yes. Users whose predicted engagement is increasing, especially in advanced features, are strong upsell candidates. The engagement score becomes an input to expansion revenue models, helping sales teams prioritize accounts that are growing into their next plan tier.

How often should engagement scores be refreshed?

For most SaaS products, daily scoring is ideal. Engagement can shift quickly after a bad support experience or a product outage. Batch scoring once a week misses these inflection points. Kumo supports continuous scoring as new session data streams in.

Bottom line: A SaaS platform with 200K users that lifts engagement by 10% through targeted interventions unlocks $18M in upsell revenue and reduces churn-driven ARR loss by 30%.

Topics covered

engagement scoring AI, user engagement prediction, product engagement model, SaaS engagement scoring, regression engagement ML, graph neural network engagement, KumoRFM engagement, relational deep learning, user retention prediction, session duration forecasting, customer engagement analytics

One Platform. One Model. Infinite Predictions.

KumoRFM

Relational Foundation Model

Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.

For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.
