Customer Health Scoring
“For each account, what will their composite health score be over the next 30 days?”
A real-world example
Manual health scores built from weighted rules (usage > X = green, tickets > Y = red) miss 60% of accounts that churn. The rules are static, the weights are guessed, and they cannot capture the compound interactions between usage decline, support escalation, and billing friction. For a B2B SaaS with 5,000 accounts and $120K average ACV, a 10% improvement in health score accuracy saves $12M in preventable churn.
Quick answer
Customer health scoring predicts a continuous composite health score for each account by learning from product usage, support interactions, billing patterns, and cross-account trends. Unlike rule-based health scores with guessed weights, AI-driven scores continuously reweight signal importance and capture compound interactions that static formulas miss.
Approaches compared
4 ways to solve this problem
1. Weighted Rule-Based Scores
Assign points for usage, support, and billing metrics using manually chosen weights. Sum into a 0-100 score. The default in most customer success platforms (Gainsight, Totango, ChurnZero).
Best for
Initial health scoring when you need something fast and explainable. Good for companies just starting their CS function.
Watch out for
Weights are guessed, not learned. The rules miss 60% of accounts that actually churn because they cannot capture interaction effects. A score that combines 'usage: green' and 'support: yellow' into 'overall: green' misses that the combination of declining usage plus escalating tickets is a strong churn signal.
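To make the approach concrete, here is a minimal sketch of a weighted rule-based score. Every threshold and point value below is invented, which is exactly the weakness being described: the weights are guessed, not learned.

```python
# Illustrative rule-based health score with hand-picked weights and
# thresholds (all numbers are made up, as they are in practice).

def rule_based_health(usage_pct, open_tickets, days_payment_late):
    score = 0
    # Usage rule: reward high seat utilization.
    score += 40 if usage_pct > 60 else (20 if usage_pct > 30 else 0)
    # Support rule: penalize open tickets.
    score += 30 if open_tickets == 0 else (15 if open_tickets <= 2 else 0)
    # Billing rule: penalize late payments.
    score += 30 if days_payment_late == 0 else (15 if days_payment_late <= 7 else 0)
    return score  # 0-100

print(rule_based_health(usage_pct=70, open_tickets=1, days_payment_late=0))  # 85
```

Note that the rules score each dimension independently and sum the results, so the compound signal of "declining usage plus escalating tickets" can never score worse than its parts.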
2. Logistic Regression / Simple ML
Train a model on pre-aggregated account-level features to predict a health-correlated outcome (renewal, expansion, churn). Use the predicted probability as the health score.
Best for
Teams that want a data-driven score without complex ML infrastructure. Interpretable coefficients make it easy to explain to CSMs.
Watch out for
Requires manual feature aggregation (average usage over 30 days, ticket count, etc.). Linear models cannot capture non-linear interactions between features. The model treats each account independently.
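A minimal sketch of this approach, scoring one account with a logistic model over pre-aggregated features. The coefficients here are illustrative, not fitted to any data:

```python
import math

# Pretend-fitted logistic model over manually aggregated account features.
# Coefficients are illustrative only; in practice they come from training.
COEFS = {"avg_daily_usage_30d": 0.05, "ticket_count_30d": -0.4,
         "days_payment_late": -0.08}
INTERCEPT = -1.0

def logistic_health(features):
    z = INTERCEPT + sum(COEFS[k] * v for k, v in features.items())
    p_renew = 1 / (1 + math.exp(-z))   # predicted renewal probability
    return round(100 * p_renew, 1)     # rescaled to a 0-100 health score

print(logistic_health({"avg_daily_usage_30d": 45, "ticket_count_30d": 2,
                       "days_payment_late": 0}))  # 61.1
```

Because the score is linear in the features before the sigmoid, an interaction like "usage decline matters more when a critical ticket is open" cannot be expressed unless someone hand-crafts a cross-term feature.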
3. Ensemble ML (XGBoost / Random Forest)
Train a gradient boosted model on dozens of hand-engineered account features. Captures non-linear relationships and feature interactions within a single table.
Best for
Teams with strong feature engineering capability and a well-maintained feature store.
Watch out for
Still operates on a flat table. Cannot see cross-account signals like 'all accounts managed by this CSM are declining' or 'accounts in this industry vertical have deteriorating health.' Feature engineering is a 4-8 week bottleneck.
4. KumoRFM (Graph Neural Networks on Relational Data)
Connects accounts, health signals, support tickets, billing events, and CSM portfolios into a relational graph. Predicts future health score as a regression, continuously learning signal importance from the full account ecosystem.
Best for
B2B SaaS companies where account health depends on multi-table signals and cross-account patterns (industry trends, CSM portfolio effects, partner health).
Watch out for
The graph advantage is strongest when cross-account relationships exist in the data. If every account is truly independent with no industry, CSM, or partner connections, the benefit over ensemble ML is smaller.
Key metric: Rule-based health scores miss 60% of accounts that churn. Graph-based models on RelBench score 76.71 vs 62.44 for flat tables, a 23% improvement that directly reduces preventable churn.
Why relational data changes the answer
Account A602 (Orbit Tech) has a health score of 54 from its latest product usage signal. A rule-based system might classify this as 'yellow' and move on. But the relational graph tells a much richer story: there is a critical support ticket that has been open for 8 days with no resolution. Product usage has dropped 38% in the last 30 days. Only 4 of 15 licensed seats are active. The CSM managing this account also manages 2 other accounts that are declining. And the billing team logged a 12-day average payment delay trend.
Each of these signals exists in a different table. The support ticket is in the SUPPORT_TICKETS table. Seat utilization requires joining ACCOUNTS to USERS. The CSM portfolio effect requires looking across accounts grouped by csm_id. Billing delays live in a separate BILLING_EVENTS table. A flat model requires someone to pre-compute all of these cross-table aggregations as features. A graph neural network traverses these relationships automatically, discovering which combinations of signals predict health trajectory changes. On the RelBench benchmark, multi-table graph models score 76.71 vs 62.44 for flat-table approaches. For customer health scoring, where the signal is inherently multi-dimensional and cross-table, this gap is even larger in practice.
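To make the feature-engineering burden concrete, here is a sketch of the cross-table aggregations a flat model needs pre-computed, using toy rows consistent with this example (the row values are illustrative):

```python
# Toy rows standing in for the ACCOUNTS and SUPPORT_TICKETS tables.
accounts = [
    {"account_id": "A601", "csm_id": "CSM-12"},
    {"account_id": "A602", "csm_id": "CSM-07"},
    {"account_id": "A603", "csm_id": "CSM-12"},
]
tickets = [
    {"account_id": "A601", "priority": "Medium", "resolution_hours": 4.2},
    {"account_id": "A602", "priority": "Critical", "resolution_hours": 48.0},
    {"account_id": "A603", "priority": "Low", "resolution_hours": 1.5},
]

def flat_features(account_id):
    """Hand-built cross-table features a flat model would require."""
    acct = next(a for a in accounts if a["account_id"] == account_id)
    own = [t for t in tickets if t["account_id"] == account_id]
    # Cross-account signal: other accounts sharing this account's CSM.
    peers = [a for a in accounts
             if a["csm_id"] == acct["csm_id"] and a["account_id"] != account_id]
    return {
        "critical_tickets": sum(t["priority"] == "Critical" for t in own),
        "avg_resolution_hours": sum(t["resolution_hours"] for t in own) / len(own),
        "csm_peer_accounts": len(peers),
    }

print(flat_features("A602"))
# {'critical_tickets': 1, 'avg_resolution_hours': 48.0, 'csm_peer_accounts': 0}
```

Every one of these joins and group-bys must be anticipated, written, and maintained by hand; a graph model instead traverses the account→ticket and account→CSM→account paths directly.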
Rule-based health scores are like checking a patient's blood pressure and declaring them healthy. A relational health model is like a full diagnostic workup: blood pressure, cholesterol trends, family history, medication adherence, specialist referral patterns, and the outcomes of similar patients at the same clinic. The single metric can be misleading; the full relational picture reveals the true state.
How KumoRFM solves this
Relational intelligence for customer retention
Kumo predicts a continuous health score — the average of future health signals — by learning the compound relationships between product usage depth, support ticket patterns, billing events, and how health trends propagate across accounts in the same industry or CSM portfolio. The model continuously reweights signal importance rather than relying on static rules.
From data to predictions
See the full pipeline in action
Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.
Your data
The relational tables Kumo learns from
ACCOUNTS
| account_id | company | plan | mrr | csm_id |
|---|---|---|---|---|
| A601 | Nexus Corp | Enterprise | $28,000 | CSM-12 |
| A602 | Orbit Tech | Growth | $8,500 | CSM-07 |
| A603 | Pinnacle Ltd | Enterprise | $45,000 | CSM-12 |
HEALTH_SIGNALS
| signal_id | account_id | score | signal_type | timestamp |
|---|---|---|---|---|
| HS801 | A601 | 82 | Product Usage | 2025-02-28 |
| HS802 | A602 | 54 | Support Health | 2025-03-01 |
| HS803 | A603 | 91 | Product Usage | 2025-03-02 |
SUPPORT_TICKETS
| ticket_id | account_id | priority | resolution_hours | timestamp |
|---|---|---|---|---|
| T901 | A601 | Medium | 4.2 | 2025-02-20 |
| T902 | A602 | Critical | 48.0 | 2025-02-25 |
| T903 | A603 | Low | 1.5 | 2025-01-15 |
Write your PQL query
Describe what to predict in 2–3 lines — Kumo handles the rest
```sql
PREDICT AVG(HEALTH_SIGNALS.SCORE, 0, 30, days)
FOR EACH ACCOUNTS.ACCOUNT_ID
```
Prediction output
Every entity gets a score, updated continuously
| ACCOUNT_ID | TIMESTAMP | TARGET_PRED |
|---|---|---|
| A601 | 2025-03-05 | 78.3 |
| A602 | 2025-03-05 | 41.7 |
| A603 | 2025-03-05 | 89.1 |
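To clarify what the query's target means: assuming `AVG(HEALTH_SIGNALS.SCORE, 0, 30, days)` averages an account's health signals over the 30 days after the prediction timestamp, the label for one account can be sketched in plain Python (rows below are illustrative, not from the tables above):

```python
from datetime import date

# Illustrative future HEALTH_SIGNALS rows: (account_id, score, timestamp).
signals = [
    ("A602", 54, date(2025, 3, 10)),
    ("A602", 40, date(2025, 3, 25)),
    ("A602", 31, date(2025, 4, 20)),  # falls outside the 30-day window
]

def target(account_id, anchor, horizon_days=30):
    """Mean health signal in [anchor, anchor + horizon_days)."""
    window = [score for acct, score, ts in signals
              if acct == account_id and 0 <= (ts - anchor).days < horizon_days]
    return sum(window) / len(window)

print(target("A602", date(2025, 3, 5)))  # 47.0
```

The model learns to predict this forward-looking average from signals available at the anchor time, which is why the output is a continuous score rather than a churn flag.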
Understand why
Every prediction includes feature attributions — no black boxes
Account A602 — Orbit Tech
Predicted: 41.7 (declining health score)
Top contributing features
| Feature | Value | Attribution |
|---|---|---|
| Unresolved critical ticket age | 8 days | 33% |
| Product usage trend (30d) | -38% | 25% |
| Active users vs licensed seats | 4 of 15 | 19% |
| CSM portfolio health trend | 3 accounts declining | 13% |
| Billing payment delay trend | +12 days avg | 10% |
Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability
PQL Documentation
Learn the Predictive Query Language — SQL-like syntax for defining any prediction task in 2–3 lines.
Python SDK
Integrate Kumo predictions into your pipelines. Train, evaluate, and deploy models programmatically.
Explainability Docs
Understand feature attributions, model evaluation metrics, and how to build trust with stakeholders.
Frequently asked questions
Common questions about customer health scoring
What is the difference between a customer health score and a churn prediction?
A health score is a continuous measure of account well-being (0-100) that captures the full spectrum from thriving to at-risk. Churn prediction is a binary outcome (will churn / will not churn). Health scores are more useful for day-to-day CS operations because they guide prioritization and intervention type, rather than sounding a yes/no alarm.
How accurate are rule-based customer health scores?
Research shows rule-based health scores miss 60% of accounts that actually churn. The core problem is static weights: usage is weighted at 40%, support at 30%, billing at 30% regardless of context. In reality, signal importance changes constantly, and the interactions between signals matter more than the individual values.
How often should customer health scores be updated?
Daily updates are the minimum for B2B SaaS. A critical support ticket filed on Monday should change the health score on Monday, not wait until the next weekly batch. Kumo supports continuous scoring that updates as new signals arrive from any connected table.
Can health scores predict expansion revenue?
Yes. Accounts with rising health scores driven by increasing seat adoption and feature usage breadth are the strongest expansion candidates. The health score becomes a bidirectional signal: declining scores trigger retention plays, rising scores trigger expansion outreach.
What data sources should feed into a customer health score?
At minimum: product usage events, support tickets, and billing/payment data. High-value additions include NPS/CSAT survey responses, login frequency by user role (especially executive sponsors and champions), CSM activity logs, and integration/API usage. The more relational tables you connect, the more compound signals the graph can discover.
Bottom line: A B2B SaaS with 5,000 accounts and $120K average ACV that improves health score accuracy by 10% saves $12M in preventable churn — replacing guesswork rules with learned, continuously updated account intelligence.
One Platform. One Model. Infinite Predictions.
KumoRFM
Relational Foundation Model
Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.
For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.
Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.