Trial-to-Paid Conversion
“Which free trial users will upgrade to a paid plan in the next 14 days?”
A real-world example
Which free trial users will upgrade to a paid plan in the next 14 days?
SaaS companies with free trials convert only 5-15% of trial users to paid plans. Product and growth teams lack visibility into which trial users are likely to convert, leading to generic onboarding sequences that fail to activate high-potential users. Meanwhile, power users who would convert with a timely nudge churn silently at the end of their trial. The difference between a 10% and 15% trial conversion rate can mean tens of millions in ARR.
Quick answer
Trial-to-paid conversion prediction identifies which free-tier users will upgrade to a paid plan within a defined time window (typically 14 days). The best models learn from feature adoption sequences, collaboration behavior, and cross-user patterns rather than simple usage counts. Targeted intervention for high-probability converters lifts trial-to-paid rates by 45%.
Approaches compared
4 ways to solve this problem
1. Time-Based Drip Campaigns
Send the same onboarding email sequence to all trial users on a fixed schedule. Day 1: welcome. Day 3: key features. Day 7: upgrade prompt. The default for most SaaS companies.
Best for
Early-stage products with small user bases where personalization overhead is not justified.
Watch out for
One-size-fits-all. A power user who adopted 5 features on Day 1 gets the same Day 3 email as someone who logged in once and never returned. No prioritization, no personalization, no early identification of high-potential users.
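A minimal sketch of what a fixed-schedule drip looks like in code, and why it cannot personalize: the email a user receives depends only on days since signup, never on behavior. The `emails_due` helper and the schedule offsets are illustrative, not taken from any particular tool.

```python
from datetime import date

# Fixed-schedule drip: every trial user gets the same email on the same
# offset from signup, regardless of what they did in the product.
DRIP_SCHEDULE = {1: "welcome", 3: "key_features", 7: "upgrade_prompt"}

def emails_due(signup: date, today: date) -> list[str]:
    """Return the drip emails that fire on `today` for a user who signed up on `signup`."""
    day = (today - signup).days
    return [name for offset, name in DRIP_SCHEDULE.items() if offset == day]

# A power user and a one-login user get the identical Day 3 email.
print(emails_due(date(2025, 10, 20), date(2025, 10, 23)))  # ['key_features']
```

Note that the function takes no usage data at all; that is the whole limitation of the approach.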
2. Product Qualified Lead (PQL) Rules
Define activation criteria: 'used feature X, invited a teammate, logged in 3+ times.' Flag users who meet the criteria as PQLs for sales or growth team follow-up.
Best for
Teams that have identified clear activation milestones through product analytics. A step up from time-based campaigns.
Watch out for
Rules are static and binary. A user who met 4 of 5 activation criteria is not a PQL, while one who met all 5 is. Rules cannot weight feature importance or capture the sequence in which features were adopted. Activation criteria are usually defined by intuition, not data.
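The brittleness of binary rules is easy to see in code. This sketch uses made-up criterion names and a hypothetical `is_pql` check; the point is only that a near-miss user scores exactly like a cold one.

```python
# Static, binary PQL rule: all five criteria must hold. No weighting,
# no notion of adoption order. Criterion names are illustrative.
ACTIVATION_CRITERIA = ["used_feature_x", "invited_teammate", "logged_in_3x",
                       "created_project", "connected_integration"]

def is_pql(user_events: set[str]) -> bool:
    return all(c in user_events for c in ACTIVATION_CRITERIA)

# A user one criterion short is treated exactly like a user with zero.
near_miss = set(ACTIVATION_CRITERIA[:4])
print(is_pql(near_miss))                 # False
print(is_pql(set(ACTIVATION_CRITERIA)))  # True
```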
3. Binary Classification (XGBoost/Random Forest)
Train a model on features like login count, feature adoption breadth, session duration, and company size. Predict conversion probability per trial user.
Best for
Teams with ML infrastructure and clean product usage data. Good accuracy when the right features are identified.
Watch out for
Feature engineering is the bottleneck. You need to manually define 'collaboration feature usage count,' 'export feature adoption,' and 'days to first API call' as separate features. The model treats each user independently, missing signals like 'users at companies this size convert 6x more when they use the collaboration feature.'
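To make the bottleneck concrete, here is a sketch of the manual feature-engineering step a gradient-boosted classifier depends on. The event fields mirror the FEATURE_USAGE table shown later on this page; the `engineer_features` helper and the specific derived features are assumptions for illustration.

```python
from datetime import date

# Raw event log, one row per (user, feature) aggregate, as in FEATURE_USAGE.
events = [
    {"user_id": "U201", "feature": "dashboard",     "count": 12, "ts": date(2025, 10, 21)},
    {"user_id": "U201", "feature": "collaboration", "count": 8,  "ts": date(2025, 10, 23)},
    {"user_id": "U201", "feature": "export",        "count": 3,  "ts": date(2025, 10, 25)},
]

def engineer_features(user_id: str, signup: date) -> dict:
    """Hand-build a flat feature row. Every signal must be named and coded manually."""
    mine = [e for e in events if e["user_id"] == user_id]
    feats = {f"{e['feature']}_count": e["count"] for e in mine}
    collab = [e for e in mine if e["feature"] == "collaboration"]
    feats["days_to_collaboration"] = (
        (min(e["ts"] for e in collab) - signup).days if collab else None
    )
    return feats

row = engineer_features("U201", date(2025, 10, 20))
print(row)  # counts plus days_to_collaboration=3
```

Each new hypothesis ("days to first API call", "teammate invites in week 1") means another hand-written column, and the resulting flat row still scores U201 in isolation from every other user.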
4. KumoRFM (Graph Neural Networks on Relational Data)
Connects users, feature usage events, and subscriptions into a relational graph. Learns conversion patterns from feature adoption sequences, collaboration signals, and cross-user patterns at similar companies. Zero feature engineering required.
Best for
PLG SaaS companies where feature adoption sequences and team-level behavior drive conversion decisions.
Watch out for
Requires event-level feature usage data with timestamps. If your product only logs aggregate session counts without feature-level tracking, instrument your product first.
Key metric: Targeted intervention for predicted converters lifts trial-to-paid rates by 45%. SAP SALT benchmark: 91% accuracy for multi-table relational models vs 75% for single-table approaches.
Why relational data changes the answer
User U201 is a free-tier user at a 50-person company who used the dashboard 12 times, the collaboration feature 8 times, and the export feature 3 times in the first 5 days. A flat model sees these as three features: dashboard_count=12, collaboration_count=8, export_count=3. But the relational graph captures the sequence: collaboration was adopted on Day 3, which is the exact pattern that precedes conversion at 8x the base rate. The export feature (a premium feature that free users can try once) was used 3 times, signaling that the user hit the paywall and kept trying.
More importantly, the graph connects U201 to other users at similar-sized companies. Users at 50-person companies who used the collaboration feature 5+ times in the first week convert at 6x the base rate. This cross-user, cross-company pattern is invisible to models that score each user independently. The GNN propagates conversion signals from similar users to U201, amplifying the prediction confidence. On the SAP SALT benchmark, multi-table relational models achieve 91% accuracy vs 75% for single-table models. For trial conversion specifically, the feature adoption sequence and cross-user patterns at similar company sizes are often more predictive than any individual usage metric.
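The sequence point can be shown in a few lines: two users with identical usage counts but opposite adoption orders are indistinguishable to a flat count vector. `U999` is a hypothetical user invented for the contrast.

```python
from datetime import date

# Same features, same counts, different adoption order.
usage = {
    "U201": [("dashboard", date(2025, 10, 21)), ("collaboration", date(2025, 10, 23))],
    "U999": [("collaboration", date(2025, 10, 21)), ("dashboard", date(2025, 10, 23))],
}

def adoption_sequence(user_id: str) -> list[str]:
    """Order features by first-use timestamp -- the signal a flat model discards."""
    return [f for f, ts in sorted(usage[user_id], key=lambda x: x[1])]

print(adoption_sequence("U201"))  # ['dashboard', 'collaboration']
print(adoption_sequence("U999"))  # ['collaboration', 'dashboard']

# Identical sets of adopted features, so identical count vectors:
assert {f for f, _ in usage["U201"]} == {f for f, _ in usage["U999"]}
```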
Predicting trial conversion from usage counts is like predicting which gym trial members will buy a membership based on how many times they visited. A relational model also sees which classes they took (feature adoption), whether they came with friends (collaboration), whether they tried the sauna that requires membership (premium feature paywall), and that trial members at this location who do all three convert 8x more often. The visit count is a weak signal; the behavioral pattern is the strong one.
How KumoRFM solves this
Relational intelligence for smarter acquisition
Kumo connects USERS, FEATURE_USAGE, and SUBSCRIPTIONS into a relational graph that captures the full onboarding journey. The GNN learns conversion patterns across feature adoption sequences — like 'free users who used the collaboration feature 5+ times and invited a teammate within the first 3 days convert at 8x the base rate.' The WHERE clause filters to active free-tier users, and the model predicts conversion probability with 14-day lookahead, giving growth teams time to intervene with targeted offers or onboarding nudges.
From data to predictions
See the full pipeline in action
Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.
Your data
The relational tables Kumo learns from
USERS
| user_id | plan_type | signup_date | company_size |
|---|---|---|---|
| U201 | free | 2025-10-20 | 50 |
| U202 | free | 2025-10-22 | 200 |
| U203 | free | 2025-10-25 | 15 |
| U204 | free | 2025-10-28 | 500 |
FEATURE_USAGE
| usage_id | user_id | feature | count | timestamp |
|---|---|---|---|---|
| FU01 | U201 | dashboard | 12 | 2025-10-21 |
| FU02 | U201 | collaboration | 8 | 2025-10-23 |
| FU03 | U201 | export | 3 | 2025-10-25 |
| FU04 | U202 | dashboard | 2 | 2025-10-23 |
| FU05 | U203 | dashboard | 18 | 2025-10-26 |
| FU06 | U203 | api_access | 6 | 2025-10-28 |
| FU07 | U204 | dashboard | 1 | 2025-10-29 |
SUBSCRIPTIONS
| sub_id | user_id | plan_type | amount | timestamp |
|---|---|---|---|---|
| S01 | U201 | pro | $99/mo | 2025-11-03 |
| S02 | U203 | team | $249/mo | 2025-11-08 |
Write your PQL query
Describe what to predict in 2–3 lines — Kumo handles the rest
```
PREDICT COUNT(SUBSCRIPTIONS.* WHERE SUBSCRIPTIONS.PLAN_TYPE != 'free', 0, 14, days) > 0
FOR EACH USERS.USER_ID
WHERE USERS.PLAN_TYPE = 'free'
```
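To unpack the semantics of the PQL query, here is the same label logic restated in plain Python over the sample tables: for each free-tier user, does a non-free subscription appear within 0 to 14 days of that user's anchor date (taken here as signup)? The `converts_within` helper is a sketch of the label definition, not part of the Kumo SDK.

```python
from datetime import date, timedelta

signups = {"U201": date(2025, 10, 20), "U202": date(2025, 10, 22),
           "U203": date(2025, 10, 25), "U204": date(2025, 10, 28)}
subscriptions = [
    {"user_id": "U201", "plan_type": "pro",  "ts": date(2025, 11, 3)},
    {"user_id": "U203", "plan_type": "team", "ts": date(2025, 11, 8)},
]

def converts_within(user_id: str, anchor: date, days: int = 14) -> bool:
    """True if the user starts a non-free subscription in [anchor, anchor + days]."""
    return any(s["user_id"] == user_id and s["plan_type"] != "free"
               and anchor <= s["ts"] <= anchor + timedelta(days=days)
               for s in subscriptions)

labels = {u: converts_within(u, anchor) for u, anchor in signups.items()}
print(labels)  # {'U201': True, 'U202': False, 'U203': True, 'U204': False}
```

Kumo generates these labels automatically from the query and learns the model; the sketch only shows what the `COUNT(... , 0, 14, days) > 0` target means.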
Prediction output
Every entity gets a score, updated continuously
| USER_ID | TIMESTAMP | TARGET_PRED | TRUE_PROB |
|---|---|---|---|
| U201 | 2025-10-20 | True | 0.88 |
| U202 | 2025-10-22 | False | 0.14 |
| U203 | 2025-10-25 | True | 0.82 |
| U204 | 2025-10-28 | False | 0.06 |
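Scores like these are most useful when they route each user to a different intervention. A minimal sketch, assuming illustrative thresholds (they are not Kumo defaults) and the probabilities from the table above:

```python
# Route trial users to interventions by predicted conversion probability.
def route(prob: float) -> str:
    if prob >= 0.70:
        return "sales_outreach"       # likely converter: timely human touch
    if prob >= 0.30:
        return "targeted_onboarding"  # persuadable: in-app nudges
    return "standard_drip"            # low intent: default sequence

scores = {"U201": 0.88, "U202": 0.14, "U203": 0.82, "U204": 0.06}
plan = {u: route(p) for u, p in scores.items()}
print(plan)  # U201/U203 -> sales_outreach, U202/U204 -> standard_drip
```

In practice the thresholds would be tuned against intervention capacity and the observed lift per tier.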
Understand why
Every prediction includes feature attributions — no black boxes
User U201 — free plan, company size 50
Predicted: True (88% probability)
Top contributing features

| Contributing feature | Value | Attribution |
|---|---|---|
| Used collaboration feature 8 times in first 3 days | 8 uses | 30% |
| Used export feature (premium activation signal) | 3 uses | 25% |
| 12 dashboard sessions (high engagement) | 12 sessions | 20% |
| Company size 50 (team plan sweet spot) | 50 employees | 16% |
| Similar users at same company size converted 6x more | 6x base rate | 9% |
Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability
PQL Documentation
Learn the Predictive Query Language — SQL-like syntax for defining any prediction task in 2–3 lines.
Python SDK
Integrate Kumo predictions into your pipelines. Train, evaluate, and deploy models programmatically.
Explainability Docs
Understand feature attributions, model evaluation metrics, and how to build trust with stakeholders.
Frequently asked questions
Common questions about trial-to-paid conversion
What is a good trial-to-paid conversion rate?
Industry benchmarks for SaaS free trials range from 5-15% without AI intervention. Companies using predictive trial scoring and targeted nudges typically achieve 15-25%. The top quartile of PLG companies with graph-based scoring reach 25-35%. The key is not the overall rate but the lift from targeted intervention on high-probability converters.
Which features predict trial conversion the most?
Collaboration features (inviting teammates, sharing work) and premium-tier features (that free users can sample but not fully use) are the strongest universal predictors. However, the specific features vary by product. Graph models discover the most predictive features automatically from the usage data, without requiring product teams to guess which features matter.
When in the trial period should you intervene?
The first 3 days are critical. Users who have not adopted a key activation feature by Day 3 rarely convert without intervention. The graph model identifies at-risk high-potential users early enough to trigger targeted onboarding nudges, in-app guidance, or sales outreach before the engagement window closes.
How does company size affect trial conversion?
Company size interacts with feature adoption in non-obvious ways. A solo user at a 500-person company who only used the dashboard has a different conversion probability than a solo user at a 5-person company with the same behavior. Graph models capture these interaction effects by learning from cross-user patterns at similar company sizes.
Bottom line: Targeted intervention for trial users predicted to convert lifts trial-to-paid rates by 45%, while identifying at-risk high-potential users early enough to save them — translating to millions in incremental ARR for product-led growth companies.
One Platform. One Model. Infinite Predictions.
KumoRFM
Relational Foundation Model
Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.
For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.