Matchmaking Optimization
“What match composition maximizes engagement?”
Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.


A real-world example
What match composition maximizes engagement?
Poor matchmaking drives 35% of competitive game churn, and players who experience 3+ one-sided matches in a row are 4x more likely to quit that session. For a competitive title with 10M MAU, a 5-minute drop in average session time caused by bad matches translates to $28M in lost ad revenue and IAP opportunities each year. Elo-based systems consider only skill, ignoring play style, social dynamics, and frustration thresholds.
Quick answer
Better matchmaking means optimizing for post-match re-queue rate, not just skill balance. Graph ML connects player skill, play style, social bonds, and frustration signals to learn that play-style diversity and friend presence matter more than tight Elo gaps. Games that optimize matchmaking for engagement see 20%+ increases in average session time.
Approaches compared
4 ways to solve this problem
1. Elo / Glicko rating systems
Assign each player a skill rating based on win/loss history and match players within tight rating bands.
Best for
Fair competitive matches where balanced win rates are the primary goal. Well-understood and battle-tested.
Watch out for
Optimizes for fairness, not engagement. A perfectly balanced match between six aggressive players can be miserable for everyone. Skill is only one dimension of match quality.
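To make the one-dimensionality concrete, here is the standard Elo update in a few lines of Python. This is the textbook formula (expected score from the rating gap, then a K-factor adjustment), not any specific title's implementation; the K value of 32 is a common default, not a recommendation.

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32) -> float:
    """New rating for A after one match (score_a: 1 = win, 0.5 = draw, 0 = loss)."""
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# A 1850-rated player beating a 1820-rated opponent gains ~14.6 points.
new_rating = elo_update(1850, 1820, 1.0)
```

Notice what the inputs are: two numbers. Play style, social context, and recent frustration never enter the computation, which is exactly the limitation described above.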
2. SBMM with hand-tuned parameters
Skill-based matchmaking with manually tuned weights for skill, latency, party size, and wait time.
Best for
Adds practical constraints (latency, queue time) on top of skill matching. Standard for most competitive titles.
Watch out for
Parameter tuning is slow and subjective. Cannot adapt to new play styles or emerging frustration patterns without manual intervention from the live-ops team.
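The "hand-tuned" part typically looks like a weighted cost function over candidate pairings. The sketch below is illustrative only; the weight values and the idea of relaxing cost as wait time grows are assumptions, not parameters from any real title.

```python
# Illustrative hand-tuned SBMM cost. Lower is better. The weights are the
# knobs a live-ops team would adjust by hand.
WEIGHTS = {"skill_gap": 1.0, "latency_ms": 0.05, "wait_s": -0.2}

def match_cost(skill_gap: float, latency_ms: float, wait_s: float) -> float:
    """Negative weight on wait time relaxes matching as the queue drags on,
    trading match quality for queue speed."""
    return (WEIGHTS["skill_gap"] * skill_gap
            + WEIGHTS["latency_ms"] * latency_ms
            + WEIGHTS["wait_s"] * wait_s)
```

Every new play style, region, or frustration pattern means another round of manual re-tuning of `WEIGHTS`, which is the maintenance burden called out above.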
3. Reinforcement learning on match outcomes
Train an RL agent to optimize match composition based on historical match data and post-match engagement signals.
Best for
Can discover non-obvious match compositions that maximize engagement. Adapts over time as player behavior shifts.
Watch out for
Requires massive volumes of match data and careful reward shaping. Exploration can create bad player experiences during training.
4. KumoRFM (relational graph ML)
Connect players, matches, performance records, and social connections into a graph. Predict post-match re-queue rate for candidate match compositions before they happen.
Best for
Highest predictive accuracy for engagement-optimized matching. Captures play-style compatibility, social bonds, and frustration signals in a single model.
Watch out for
Requires sufficient match history and social graph data. Less effective for brand-new games without behavioral baselines.
Key metric: Games that optimize matchmaking for engagement using graph ML see 20%+ increases in average session time, translating to $28M in additional annual revenue for a 10M MAU title.
Why relational data changes the answer
Match quality depends on the interaction between players, not the properties of each player in isolation. A player's frustration level is shaped by their last five matches (performance table), their play style compatibility with teammates (player graph), and whether they are playing with friends (social graph). Elo systems compress all of this into a single number and lose the dimensional richness that separates a great match from a miserable one.
Relational models read the full context: the player interaction graph reveals that matching a high-aggression player with a defensive teammate yields 2x longer sessions than matching two aggressive players. The social graph shows that friends who queue together re-queue 35% more often. The temporal match history captures frustration buildup: a player on a 6-game losing streak needs different treatment than a player on a 3-game winning streak, even if they have identical ratings. These multi-table, multi-hop signals are invisible to flat rating systems.
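As a toy illustration of a temporal signal like "frustration buildup," the sketch below scores a player from their last few results. The streak/stomp weighting and thresholds are invented for illustration; a relational model would learn such signals from the performance table rather than have them hand-coded.

```python
def frustration_index(last_results):
    """last_results: list of (won: bool, score_margin: int), most recent last.
    Returns a 0-1 signal combining losing-streak length and one-sided losses.
    Weights and the margin threshold are illustrative assumptions."""
    streak = 0
    for won, _ in reversed(last_results):
        if won:
            break
        streak += 1
    stomps = sum(1 for won, margin in last_results if not won and margin >= 10)
    return min(1.0, 0.15 * streak + 0.1 * stomps)

# A six-game losing streak with three blowouts maxes out the signal,
# even though the player's rating may be identical to a teammate's.
hot = frustration_index([(False, 12), (False, 4), (False, 15),
                         (False, 11), (False, 2), (False, 3)])
```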
Matchmaking on Elo alone is like seating a dinner party by income level. Everyone at the table earns $100K, but you put six accountants together and the conversation dies. A great host considers personality, shared interests, and existing friendships. Graph ML is the host who reads the room instead of reading the spreadsheet.
How KumoRFM solves this
Graph-learned player intelligence across your entire game ecosystem
Kumo models the full player interaction graph: skill ratings, match outcomes, play style embeddings, social connections, and frustration signals. It learns that matching a high-aggression player with a defensive teammate yields 2x longer sessions than matching two aggressive players. The model optimizes for post-match engagement (did the player queue again?) rather than just win-rate balance, capturing the interplay between competition, social bonds, and play style compatibility.
From data to predictions
See the full pipeline in action
Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.
Your data
The relational tables Kumo learns from
PLAYERS
| player_id | skill_rating | play_style | sessions_7d |
|---|---|---|---|
| PLR301 | 1850 | Aggressive | 14 |
| PLR302 | 1820 | Defensive | 22 |
| PLR303 | 1890 | Balanced | 8 |
MATCHES
| match_id | timestamp | mode | duration_min | avg_rating |
|---|---|---|---|---|
| M001 | 2025-03-02 20:15 | Ranked | 28 | 1840 |
| M002 | 2025-03-02 21:00 | Ranked | 12 | 1860 |
PERFORMANCE
| perf_id | match_id | player_id | kills | deaths | played_again |
|---|---|---|---|---|---|
| PF01 | M001 | PLR301 | 15 | 8 | Y |
| PF02 | M001 | PLR302 | 4 | 3 | Y |
| PF03 | M002 | PLR303 | 22 | 2 | N |
SOCIAL_GRAPH
| edge_id | player_a | player_b | type | games_together |
|---|---|---|---|---|
| SG01 | PLR301 | PLR302 | Friend | 45 |
| SG02 | PLR302 | PLR303 | Clan | 12 |
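The signals a relational model consumes are essentially multi-table joins over tables like the ones above. Here is a minimal plain-Python sketch, mirroring the sample rows, of two such aggregations: per-match re-queue rate (the prediction target) and friend presence (a social-graph feature). KumoRFM learns aggregations like these automatically; this is just to show what "multi-table, multi-hop" means concretely.

```python
# Tiny in-memory mirror of the sample PERFORMANCE and SOCIAL_GRAPH rows above.
performance = [
    {"match_id": "M001", "player_id": "PLR301", "played_again": True},
    {"match_id": "M001", "player_id": "PLR302", "played_again": True},
    {"match_id": "M002", "player_id": "PLR303", "played_again": False},
]
social = [{"player_a": "PLR301", "player_b": "PLR302", "type": "Friend"}]

def requeue_rate(match_id):
    """Fraction of a match's players who queued again (the target signal)."""
    rows = [r for r in performance if r["match_id"] == match_id]
    return sum(r["played_again"] for r in rows) / len(rows)

def friends_in_match(match_id):
    """Count friend edges whose both endpoints played in the match."""
    players = {r["player_id"] for r in performance if r["match_id"] == match_id}
    return sum(1 for e in social
               if e["type"] == "Friend"
               and e["player_a"] in players and e["player_b"] in players)
```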
Write your PQL query
Describe what to predict in 2–3 lines — Kumo handles the rest
PREDICT AVG(PERFORMANCE.PLAYED_AGAIN, 0, 1, days) FOR EACH MATCHES.MATCH_ID -- Optimize for re-queue rate, not just win balance
Prediction output
Every entity gets a score, updated continuously
| MATCH_CONFIG | AVG_RATING_DIFF | STYLE_DIVERSITY | PREDICTED_REQUEUE_RATE |
|---|---|---|---|
| Config A | 30 | High | 0.78 |
| Config B | 15 | Low | 0.52 |
| Config C | 25 | Medium | 0.71 |
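Downstream, a matchmaker would consume scores like these by maximizing predicted re-queue rate subject to a competitive-fairness constraint, as a sketch (the rating-diff bound of 50 is an assumed threshold, not a recommendation):

```python
# Candidate configs mirroring the prediction output table above.
configs = [
    {"name": "Config A", "rating_diff": 30, "requeue": 0.78},
    {"name": "Config B", "rating_diff": 15, "requeue": 0.52},
    {"name": "Config C", "rating_diff": 25, "requeue": 0.71},
]
MAX_RATING_DIFF = 50  # fairness bound (illustrative assumption)

# Engagement is the objective; win-rate balance is a constraint, not the goal.
eligible = [c for c in configs if c["rating_diff"] <= MAX_RATING_DIFF]
best = max(eligible, key=lambda c: c["requeue"])
```

Here the widest rating gap wins because its predicted re-queue rate is highest and it still clears the fairness bound, which is the inversion of priorities this page argues for.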
Understand why
Every prediction includes feature attributions — no black boxes
Match Config B -- Low style diversity, tight rating
Predicted re-queue rate: 52%
Top contributing features
Play style homogeneity
All aggressive
32% attribution
Recent frustration index (team avg)
0.7 (high)
24% attribution
Social connection density
0 friends in match
18% attribution
Session depth (matches played tonight)
5th match
14% attribution
Historical stomp rate for config
38%
12% attribution
Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability
PQL Documentation
Learn the Predictive Query Language — SQL-like syntax for defining any prediction task in 2–3 lines.
Python SDK
Integrate Kumo predictions into your pipelines. Train, evaluate, and deploy models programmatically.
Explainability Docs
Understand feature attributions, model evaluation metrics, and how to build trust with stakeholders.
Frequently asked questions
Common questions about matchmaking optimization
How does ML improve game matchmaking?
ML matchmaking optimizes for player engagement (re-queue rate, session length) rather than just win-rate balance. By analyzing play style compatibility, social connections, and frustration levels across the player network, graph ML finds match compositions that keep players playing. The result is 20%+ longer sessions compared to pure skill-based matching.
What is wrong with Elo-based matchmaking?
Nothing is wrong with Elo for competitive fairness. But fairness and fun are different objectives. Elo ignores play-style diversity, social bonds, and frustration accumulation. Two players with identical Elo ratings can have completely different engagement outcomes depending on who they are matched with and how their last few matches went.
What data do you need for ML-based matchmaking?
At minimum: player profiles with skill ratings, match history with outcomes, and post-match engagement signals (did the player queue again?). For best results, add play-style classifications, social connections (friends, clan members), and per-match performance data. The social graph is especially high-value for engagement optimization.
How do you measure matchmaking quality?
Re-queue rate is the gold standard: did the player choose to play another match? Session length and matches-per-session are secondary metrics. Win-rate balance matters for competitive integrity but is a constraint, not the objective. The best matchmaking systems maximize engagement while keeping win rates within acceptable bounds.
Can ML matchmaking reduce player churn?
Yes. Poor matchmaking drives 35% of competitive game churn. Players who experience 3+ one-sided matches in a row are 4x more likely to quit that session. ML matchmaking that accounts for frustration buildup and play-style compatibility reduces these rage-quit sequences by 40-50%, directly improving retention.
Bottom line: A competitive game with 10M MAU that increases average session time by 5 minutes through better matchmaking generates $28M in additional annual revenue. Kumo optimizes for post-match re-queue rate using play style, social bonds, and frustration signals that Elo alone cannot capture.
One Platform. One Model. Infinite Predictions.
KumoRFM
Relational Foundation Model
Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.
For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.




