Regression · Match Quality

Matchmaking Optimization

What match composition maximizes engagement?


A real-world example

What match composition maximizes engagement?

Poor matchmaking drives 35% of competitive game churn. Players who experience 3+ one-sided matches in a row are 4x more likely to quit that session. A competitive title with 10M MAU where average session time drops 5 minutes due to bad matches loses $28M annually in reduced ad revenue and IAP opportunities. Elo-based systems consider only skill, ignoring play style, social dynamics, and frustration thresholds.
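The $28M figure is a straightforward back-of-envelope product. A sketch of the arithmetic, with illustrative assumptions for sessions per month and blended revenue per player-minute (neither figure is stated above):

```python
# Back-of-envelope estimate of annual revenue lost to bad matchmaking.
# Assumptions (illustrative, not from live data): each player loses
# 5 minutes of session time in ~10 sessions per month, and blended
# ad + IAP revenue is about $0.00467 per player-minute.
MAU = 10_000_000
lost_minutes_per_session = 5
sessions_per_month = 10
months = 12
revenue_per_minute = 0.00467  # blended ad + IAP, assumed

lost_minutes_per_year = (MAU * lost_minutes_per_session
                         * sessions_per_month * months)
annual_loss = lost_minutes_per_year * revenue_per_minute
print(f"${annual_loss / 1e6:.0f}M lost per year")  # $28M lost per year
```

Change any assumption and the total scales linearly, which is why the per-minute revenue estimate matters more than the headline number.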

Quick answer

Better matchmaking means optimizing for post-match re-queue rate, not just skill balance. Graph ML connects player skill, play style, social bonds, and frustration signals to learn that play-style diversity and friend presence matter more than tight Elo gaps. Games that optimize matchmaking for engagement see 20%+ increases in average session time.

Approaches compared

4 ways to solve this problem

1. Elo / Glicko rating systems

Assign each player a skill rating based on win/loss history and match players within tight rating bands.

Best for

Fair competitive matches where balanced win rates are the primary goal. Well-understood and battle-tested.

Watch out for

Optimizes for fairness, not engagement. A perfectly balanced match between six aggressive players can be miserable for everyone. Skill is only one dimension of match quality.
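For reference, the standard Elo update compresses a player into exactly one number: an expected score derived from the rating gap, nudged by each result. A minimal sketch:

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B (standard Elo formula)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32) -> float:
    """New rating for A; score_a is 1 (win), 0.5 (draw), 0 (loss)."""
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# Evenly rated players: expected score 0.5, so a win gains K/2 = 16 points.
print(elo_expected(1850, 1850))   # 0.5
print(elo_update(1850, 1850, 1))  # 1866.0
```

Note that nothing in these two functions can see play style, social ties, or recent frustration; that is the whole limitation.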

2. SBMM with hand-tuned parameters

Skill-based matchmaking with manually tuned weights for skill, latency, party size, and wait time.

Best for

Adds practical constraints (latency, queue time) on top of skill matching. Standard for most competitive titles.

Watch out for

Parameter tuning is slow and subjective. Cannot adapt to new play styles or emerging frustration patterns without manual intervention from the live-ops team.
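A typical hand-tuned matcher scores candidate pairings with a weighted cost over skill gap, latency, and wait time. A sketch with hypothetical weights, i.e. exactly the parameters the live-ops team has to keep retuning:

```python
# Hand-tuned SBMM match-cost sketch. The weights are hypothetical; in
# practice they are tuned manually per title and rarely revisited.
WEIGHTS = {"skill_gap": 1.0, "latency_ms": 0.05, "wait_time_s": 0.2}

def match_cost(skill_gap: float, latency_ms: float, wait_time_s: float) -> float:
    """Lower is better. Wait time is subtracted so the matcher relaxes
    skill/latency constraints for players who have queued longer."""
    return (WEIGHTS["skill_gap"] * skill_gap
            + WEIGHTS["latency_ms"] * latency_ms
            - WEIGHTS["wait_time_s"] * wait_time_s)

# A 30-point skill gap at 40 ms latency after 20 s in queue:
print(match_cost(30, 40, 20))  # 30 + 2 - 4 = 28.0
```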

3. Reinforcement learning on match outcomes

Train an RL agent to optimize match composition based on historical match data and post-match engagement signals.

Best for

Can discover non-obvious match compositions that maximize engagement. Adapts over time as player behavior shifts.

Watch out for

Requires massive volumes of match data and careful reward shaping. Exploration can create bad player experiences during training.
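The explore/exploit trade-off, and why exploration briefly degrades some players' matches, can be illustrated with a minimal epsilon-greedy bandit over candidate compositions (toy re-queue probabilities, not real data; production systems use far richer state and reward shaping):

```python
import random

# Epsilon-greedy sketch of learning which match composition maximizes
# re-queue rate. 10% of matches are deliberately "exploration" matches,
# which is precisely the bad-experience cost the text warns about.
random.seed(0)
configs = ["A", "B", "C"]
true_requeue = {"A": 0.78, "B": 0.52, "C": 0.71}  # unknown to the agent
counts = {c: 0 for c in configs}
values = {c: 0.0 for c in configs}

for _ in range(5000):
    if random.random() < 0.1:                       # explore
        c = random.choice(configs)
    else:                                           # exploit best estimate
        c = max(values, key=values.get)
    reward = 1.0 if random.random() < true_requeue[c] else 0.0
    counts[c] += 1
    values[c] += (reward - values[c]) / counts[c]   # incremental mean

best = max(values, key=values.get)
print(best, round(values[best], 2))
```

After 5,000 simulated matches the agent's estimate for the best config converges near its true re-queue rate, but roughly 10% of players were knowingly placed in lower-quality matches along the way.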

4. KumoRFM (relational graph ML)

Connect players, matches, performance records, and social connections into a graph. Predict post-match re-queue rate for candidate match compositions before they happen.

Best for

Highest predictive accuracy for engagement-optimized matching. Captures play-style compatibility, social bonds, and frustration signals in a single model.

Watch out for

Requires sufficient match history and social graph data. Less effective for brand-new games without behavioral baselines.

Key metric: Games that optimize matchmaking for engagement using graph ML see 20%+ increases in average session time, translating to $28M in additional annual revenue for a 10M MAU title.

Why relational data changes the answer

Match quality depends on the interaction between players, not the properties of each player in isolation. A player's frustration level is shaped by their last five matches (performance table), their play style compatibility with teammates (player graph), and whether they are playing with friends (social graph). Elo systems compress all of this into a single number and lose the dimensional richness that separates a great match from a miserable one.

Relational models read the full context: the player interaction graph reveals that matching a high-aggression player with a defensive teammate yields 2x longer sessions than matching two aggressive players. The social graph shows that friends who queue together re-queue 35% more often. The temporal match history captures frustration buildup: a player on a 6-game losing streak needs different treatment than a player on a 3-game winning streak, even if they have identical ratings. These multi-table, multi-hop signals are invisible to flat rating systems.
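Two of those signals are easy to make concrete. A toy sketch of a frustration index from recent match history and a friend-presence count from the social graph (field names and records are illustrative):

```python
# Two "multi-hop" signals a flat rating compresses away, computed from
# toy match-history and social-graph records (illustrative schema).
recent_results = {            # newest first: 1 = win, 0 = loss
    "PLR301": [0, 0, 0, 0, 0, 0],   # 6-game losing streak
    "PLR303": [1, 1, 1, 0, 1],
}
friend_edges = {("PLR301", "PLR302"), ("PLR302", "PLR303")}

def frustration_index(player: str, window: int = 5) -> float:
    """Share of losses in the last `window` matches (0 = calm, 1 = tilted)."""
    recent = recent_results.get(player, [])[:window]
    return 1 - sum(recent) / len(recent) if recent else 0.0

def friends_in_lobby(player: str, lobby: list[str]) -> int:
    """Count undirected social-graph edges between `player` and the lobby."""
    return sum((player, o) in friend_edges or (o, player) in friend_edges
               for o in lobby if o != player)

print(frustration_index("PLR301"))                                 # 1.0
print(friends_in_lobby("PLR301", ["PLR301", "PLR302", "PLR303"]))  # 1
```

Both features require joining a second table keyed on the player; a single scalar rating has nowhere to store either.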

Matchmaking on Elo alone is like seating a dinner party by income level. Everyone at the table earns $100K, but you put six accountants together and the conversation dies. A great host considers personality, shared interests, and existing friendships. Graph ML is the host who reads the room instead of reading the spreadsheet.

How KumoRFM solves this

Graph-learned player intelligence across your entire game ecosystem

Kumo models the full player interaction graph: skill ratings, match outcomes, play style embeddings, social connections, and frustration signals. It learns that matching a high-aggression player with a defensive teammate yields 2x longer sessions than matching two aggressive players. The model optimizes for post-match engagement (did the player queue again?) rather than just win-rate balance, capturing the interplay between competition, social bonds, and play style compatibility.

From data to predictions

See the full pipeline in action

Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.

Step 1

Your data

The relational tables Kumo learns from

PLAYERS

| player_id | skill_rating | play_style | sessions_7d |
| --- | --- | --- | --- |
| PLR301 | 1850 | Aggressive | 14 |
| PLR302 | 1820 | Defensive | 22 |
| PLR303 | 1890 | Balanced | 8 |

MATCHES

| match_id | timestamp | mode | duration_min | avg_rating |
| --- | --- | --- | --- | --- |
| M001 | 2025-03-02 20:15 | Ranked | 28 | 1840 |
| M002 | 2025-03-02 21:00 | Ranked | 12 | 1860 |

PERFORMANCE

| perf_id | match_id | player_id | kills | deaths | played_again |
| --- | --- | --- | --- | --- | --- |
| PF01 | M001 | PLR301 | 15 | 8 | Y |
| PF02 | M001 | PLR302 | 4 | 3 | Y |
| PF03 | M002 | PLR303 | 22 | 2 | N |

SOCIAL_GRAPH

| edge_id | player_a | player_b | type | games_together |
| --- | --- | --- | --- | --- |
| SG01 | PLR301 | PLR302 | Friend | 45 |
| SG02 | PLR302 | PLR303 | Clan | 12 |
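A feature like social connection density falls out of a simple join between these tables. A sketch over the sample rows above (plain Python, illustrative only):

```python
from itertools import combinations

# Join the sample PERFORMANCE and SOCIAL_GRAPH rows to derive a per-match
# feature: how many friend/clan pairs were in the lobby together.
performance = [
    ("PF01", "M001", "PLR301"),
    ("PF02", "M001", "PLR302"),
    ("PF03", "M002", "PLR303"),
]
social = {("PLR301", "PLR302"): "Friend", ("PLR302", "PLR303"): "Clan"}

def connected_pairs(match_id: str) -> int:
    """Count socially connected player pairs in a given match."""
    players = [p for _, m, p in performance if m == match_id]
    return sum((a, b) in social or (b, a) in social
               for a, b in combinations(players, 2))

print(connected_pairs("M001"))  # 1 -- PLR301 and PLR302 are friends
print(connected_pairs("M002"))  # 0 -- PLR303 queued alone
```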
Step 2

Write your PQL query

Describe what to predict in 2–3 lines — Kumo handles the rest

PQL
PREDICT AVG(PERFORMANCE.PLAYED_AGAIN, 0, 1, days)
FOR EACH MATCHES.MATCH_ID
-- Optimize for re-queue rate, not just win balance
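Read as plain Python, the target roughly asks: for each match, what share of its players queued again within a day. A sketch of the label semantics only, not Kumo's implementation:

```python
# Rough plain-Python reading of the PQL target above: for each match,
# the average of PLAYED_AGAIN over its PERFORMANCE rows within the next
# day. Sample rows mirror the PERFORMANCE table in Step 1.
performance = [
    {"match_id": "M001", "player_id": "PLR301", "played_again": 1},
    {"match_id": "M001", "player_id": "PLR302", "played_again": 1},
    {"match_id": "M002", "player_id": "PLR303", "played_again": 0},
]

def requeue_rate(match_id: str) -> float:
    """AVG of played_again over the match's performance rows."""
    rows = [r["played_again"] for r in performance if r["match_id"] == match_id]
    return sum(rows) / len(rows)

print(requeue_rate("M001"))  # 1.0 -- both players queued again
print(requeue_rate("M002"))  # 0.0
```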
Step 3

Prediction output

Every entity gets a score, updated continuously

| MATCH_CONFIG | AVG_RATING_DIFF | STYLE_DIVERSITY | PREDICTED_REQUEUE_RATE |
| --- | --- | --- | --- |
| Config A | 30 | High | 0.78 |
| Config B | 15 | Low | 0.52 |
| Config C | 25 | Medium | 0.71 |
Step 4

Understand why

Every prediction includes feature attributions — no black boxes

Match Config B -- Low style diversity, tight rating

Predicted re-queue rate: 52%

Top contributing features

| Feature | Value | Attribution |
| --- | --- | --- |
| Play style homogeneity | All aggressive | 32% |
| Recent frustration index (team avg) | 0.7 (high) | 24% |
| Social connection density | 0 friends in match | 18% |
| Session depth (matches played tonight) | 5th match | 14% |
| Historical stomp rate for config | 38% | 12% |

Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability

Frequently asked questions

Common questions about matchmaking optimization

How does ML improve game matchmaking?

ML matchmaking optimizes for player engagement (re-queue rate, session length) rather than just win-rate balance. By analyzing play style compatibility, social connections, and frustration levels across the player network, graph ML finds match compositions that keep players playing. The result is 20%+ longer sessions compared to pure skill-based matching.

What is wrong with Elo-based matchmaking?

Nothing is wrong with Elo for competitive fairness. But fairness and fun are different objectives. Elo ignores play-style diversity, social bonds, and frustration accumulation. Two players with identical Elo ratings can have completely different engagement outcomes depending on who they are matched with and how their last few matches went.

What data do you need for ML-based matchmaking?

At minimum: player profiles with skill ratings, match history with outcomes, and post-match engagement signals (did the player queue again?). For best results, add play-style classifications, social connections (friends, clan members), and per-match performance data. The social graph is especially high-value for engagement optimization.

How do you measure matchmaking quality?

Re-queue rate is the gold standard: did the player choose to play another match? Session length and matches-per-session are secondary metrics. Win-rate balance matters for competitive integrity but is a constraint, not the objective. The best matchmaking systems maximize engagement while keeping win rates within acceptable bounds.
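That "constraint, not the objective" framing reduces to a one-line selection rule. A sketch over the candidate configs from the prediction-output step, with an assumed rating-gap threshold standing in for the win-rate bound:

```python
# Select the engagement-optimal match config subject to a competitive-
# integrity constraint (proxied here by average rating gap). Numbers
# mirror the prediction-output table; the threshold is an assumption.
candidates = [
    {"config": "A", "rating_diff": 30, "predicted_requeue": 0.78},
    {"config": "B", "rating_diff": 15, "predicted_requeue": 0.52},
    {"config": "C", "rating_diff": 25, "predicted_requeue": 0.71},
]
MAX_RATING_DIFF = 50  # fairness bound, hypothetical threshold

eligible = [c for c in candidates if c["rating_diff"] <= MAX_RATING_DIFF]
best = max(eligible, key=lambda c: c["predicted_requeue"])
print(best["config"])  # A -- highest re-queue rate within the bound
```

Tightening MAX_RATING_DIFF trades engagement for fairness explicitly, instead of baking the trade-off invisibly into a rating band.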

Can ML matchmaking reduce player churn?

Yes. Poor matchmaking drives 35% of competitive game churn. Players who experience 3+ one-sided matches in a row are 4x more likely to quit that session. ML matchmaking that accounts for frustration buildup and play-style compatibility reduces these rage-quit sequences by 40-50%, directly improving retention.

Bottom line: A competitive game with 10M MAU that increases average session time by 5 minutes through better matchmaking generates $28M in additional annual revenue. Kumo optimizes for post-match re-queue rate using play style, social bonds, and frustration signals that Elo alone cannot capture.

Topics covered

matchmaking optimization AI · game matchmaking ML · player engagement prediction · skill-based matchmaking · match quality model · graph neural network matchmaking · KumoRFM matchmaking · SBMM optimization · competitive game balance AI

One Platform. One Model. Infinite Predictions.

KumoRFM

Relational Foundation Model

Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.

For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.

Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.