No-Show Prediction
“Which patients will miss their appointment?”
A real-world example
Which patients will miss their appointment?
Patient no-shows cost the U.S. healthcare system $150B annually. A mid-size health system with 200 providers loses $3.2M per year to empty appointment slots. Generic reminder systems treat every patient the same. The real signal is in the relationships: which provider, which day, which referral chain, which past visit patterns predict who will actually show up.
Quick answer
AI predicts patient no-shows by connecting appointment details, visit history, provider schedules, and patient demographics into a relational graph. Unlike generic reminder systems that treat every patient the same, graph-based models learn that specific patient-provider-day combinations predict no-shows at 3x the average rate. A 200-provider health system recovering 20% of no-show slots saves $3.2M annually through predictive overbooking and targeted interventions.
Approaches compared
4 ways to solve this problem
1. Universal Reminder Systems
Send text/email/phone reminders to all patients 24-48 hours before their appointment. The baseline approach at most health systems. Reduces no-shows by 5-10% on average.
Best for
Low-cost, universal coverage. Every patient gets at least one touchpoint. Works for patients who simply forget.
Watch out for
Does not address the root causes of no-shows: transportation barriers, scheduling conflicts, care avoidance, or provider dissatisfaction. Treats a forgetful retiree and a transportation-challenged Medicaid patient the same way.
2. Historical No-Show Rate Thresholds
Flag patients whose personal no-show rate exceeds a threshold (e.g., 30%+). Apply overbooking rules based on the patient's historical rate.
Best for
Identifying chronic no-show patients for double-booking or waitlist management.
Watch out for
Penalizes patients for past behavior without understanding context. A patient with 3 past no-shows might have had a transportation issue that is now resolved. Also misses first-time patients entirely since they have no history.
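The threshold rule can be sketched in a few lines. This is a minimal illustration, not a production policy; the visit records and the 30% cutoff are made up, and the last branch shows the first-time-patient blind spot described above.

```python
# Illustrative visit history; in practice this comes from the EHR.
visits = [
    {"patient_id": "P2001", "status": "No-show"},
    {"patient_id": "P2001", "status": "Completed"},
    {"patient_id": "P2001", "status": "No-show"},
    {"patient_id": "P2002", "status": "Completed"},
]

def no_show_rate(patient_id, history):
    rows = [v for v in history if v["patient_id"] == patient_id]
    if not rows:
        return None  # first-time patient: no history, so no signal at all
    misses = sum(v["status"] == "No-show" for v in rows)
    return misses / len(rows)

def flag_for_overbooking(patient_id, history, threshold=0.30):
    rate = no_show_rate(patient_id, history)
    return rate is not None and rate >= threshold
```

Note that `flag_for_overbooking` silently returns `False` for any patient with no history, which is exactly the miss described above: new patients are invisible to the rule.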
3. Logistic Regression on Patient Features
Predict no-show probability using patient demographics, insurance type, distance to clinic, and appointment characteristics. Trained on historical appointment data.
Best for
Moderate accuracy improvement over simple rate thresholds with interpretable risk factors.
Watch out for
Cannot capture the interaction between patient and provider. A patient who no-shows for specialist referrals but always attends primary care has a context-dependent pattern that flat models average out.
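A flat logistic model scores each appointment from a single global weight per feature. The sketch below shows the scoring step with hypothetical, hand-picked coefficients (all numbers here are made up for illustration, not learned from real data):

```python
import math

# Hypothetical coefficients a trained logistic model might assign to
# flat appointment features; every value here is illustrative.
WEIGHTS = {"distance_miles": 0.12, "lead_time_days": 0.07, "is_medicaid": 0.9}
BIAS = -3.5

def no_show_probability(features):
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps the score to a probability

# One global weight per feature means the model cannot express
# "misses with Dr. Kim but always attends Dr. Rajan" -- that
# patient-provider signal gets averaged into the flat coefficients.
high_risk = no_show_probability(
    {"distance_miles": 8.2, "lead_time_days": 28, "is_medicaid": 1})  # roughly 0.6
low_risk = no_show_probability(
    {"distance_miles": 1.0, "lead_time_days": 3, "is_medicaid": 0})   # under 0.05
```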
4. Graph Neural Networks (Kumo's Approach)
Connects patients, appointments, visit history, providers, and schedule context into a relational graph. Learns that specific patient-provider-day-type combinations predict no-shows differently than any single factor suggests.
Best for
Capturing interaction effects: patients referred by certain providers to specific specialists on Monday mornings have 3x higher no-show rates. Household effects: patients in the same household miss together.
Watch out for
Requires connected scheduling, visit history, and provider data. If appointment and visit history systems are separate, integration is a prerequisite.
Key metric: Patient no-shows cost the US healthcare system $150B annually. Graph-based prediction recovers 20% of no-show slots, saving $3.2M per year for a 200-provider health system.
Why relational data changes the answer
Flat no-show models see each appointment as an independent row: patient age, insurance type, appointment type, days since booking. They can predict that Medicaid patients booked 28 days in advance for follow-up visits have higher no-show rates. But they cannot see that this specific patient no-shows 60% of the time with Dr. Kim on Mondays yet has never missed an appointment with Dr. Rajan, that appointments booked more than 21 days out for this provider's specialty have twice the no-show rate, or that the patient's household member also no-showed on the same day last time, suggesting a shared transportation barrier. These are interaction patterns between patients, providers, and schedules that emerge only when the data is connected.
Relational learning maps these interactions directly. The model walks from the appointment to the patient's full visit history (including which providers they kept vs. missed), to the provider's schedule patterns (which days and times have high no-show rates), to the patient's household (do family members show similar patterns). It learns that the combination of long booking lead time + Monday morning + specialist referral + prior no-show with same provider predicts a 71% no-show probability, while any single factor alone would predict only 25-30%. This precision enables smart overbooking for high-risk slots and targeted outreach (ride assistance, schedule change offers) for at-risk patients.
Predicting no-shows from a flat appointment table is like predicting restaurant cancellations by looking only at the reservation time and party size. You miss that this customer cancels every time it rains, books at this restaurant specifically when their first-choice place is full, and last time brought a guest who left a bad review. The no-show is about the relationship between the customer, the restaurant, and the circumstances, not just the reservation details.
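The gap between a flat per-patient rate and a joined patient-provider view can be made concrete. The records below are illustrative, mirroring the Dr. Kim (DR101) / Dr. Rajan (DR205) example above:

```python
# Illustrative visit records for one patient across two providers.
visit_history = [
    {"patient_id": "P2001", "provider_id": "DR101", "status": "No-show"},
    {"patient_id": "P2001", "provider_id": "DR101", "status": "No-show"},
    {"patient_id": "P2001", "provider_id": "DR101", "status": "No-show"},
    {"patient_id": "P2001", "provider_id": "DR101", "status": "Completed"},
    {"patient_id": "P2001", "provider_id": "DR101", "status": "Completed"},
    {"patient_id": "P2001", "provider_id": "DR205", "status": "Completed"},
    {"patient_id": "P2001", "provider_id": "DR205", "status": "Completed"},
    {"patient_id": "P2001", "provider_id": "DR205", "status": "Completed"},
]

def no_show_rate(rows):
    return sum(r["status"] == "No-show" for r in rows) / len(rows)

def patient_rate(pid):
    # What a flat model sees: one blended rate per patient.
    return no_show_rate([r for r in visit_history if r["patient_id"] == pid])

def pair_rate(pid, prov):
    # What the joined view exposes: the rate per patient-provider pair.
    return no_show_rate([r for r in visit_history
                         if r["patient_id"] == pid and r["provider_id"] == prov])
```

Here the flat view reports a 37.5% blended rate for P2001, while the joined view separates a 60% rate with DR101 from a 0% rate with DR205; a relational model learns from the separated signal rather than the blend.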
How KumoRFM solves this
Graph-learned clinical intelligence across your entire patient network
Kumo connects patients, appointments, visit history, and providers into a single relational graph. It learns that patients referred by certain providers to specific specialists on Monday mornings have 3x higher no-show rates. The model captures temporal patterns (seasonal trends, day-of-week effects) and social patterns (patients in the same household missing together) that rule-based systems cannot detect.
From data to predictions
See the full pipeline in action
Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.
Your data
The relational tables Kumo learns from
PATIENTS
| patient_id | age | zip_code | insurance |
|---|---|---|---|
| P2001 | 34 | 10001 | Medicaid |
| P2002 | 67 | 10025 | Medicare |
| P2003 | 45 | 10013 | Commercial |
APPOINTMENTS
| appt_id | patient_id | provider_id | scheduled_date | type |
|---|---|---|---|---|
| A001 | P2001 | DR101 | 2025-03-15 | Follow-up |
| A002 | P2002 | DR205 | 2025-03-16 | New patient |
| A003 | P2003 | DR101 | 2025-03-15 | Annual exam |
VISIT_HISTORY
| visit_id | patient_id | date | status | provider_id |
|---|---|---|---|---|
| V001 | P2001 | 2025-01-10 | No-show | DR101 |
| V002 | P2002 | 2025-02-05 | Completed | DR205 |
| V003 | P2003 | 2025-02-20 | Completed | DR101 |
PROVIDERS
| provider_id | name | specialty | location |
|---|---|---|---|
| DR101 | Dr. Kim | Internal Medicine | Main Campus |
| DR205 | Dr. Rajan | Cardiology | West Wing |
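One way to picture how these tables connect: the foreign keys (`patient_id`, `provider_id`) act as the edges of the relational graph. The sketch below uses the sample rows above as plain Python records; in production they would come from scheduling and EHR systems.

```python
# Sample rows from the tables above, keyed by their primary keys.
patients = {
    "P2001": {"age": 34, "zip_code": "10001", "insurance": "Medicaid"},
    "P2003": {"age": 45, "zip_code": "10013", "insurance": "Commercial"},
}
providers = {
    "DR101": {"name": "Dr. Kim", "specialty": "Internal Medicine",
              "location": "Main Campus"},
}
appointments = [
    {"appt_id": "A001", "patient_id": "P2001", "provider_id": "DR101",
     "scheduled_date": "2025-03-15", "type": "Follow-up"},
    {"appt_id": "A003", "patient_id": "P2003", "provider_id": "DR101",
     "scheduled_date": "2025-03-15", "type": "Annual exam"},
]

def expand(appt):
    """Follow an appointment's foreign keys out to its graph neighbors."""
    return {**appt,
            "patient": patients[appt["patient_id"]],
            "provider": providers[appt["provider_id"]]}
```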
Write your PQL query
Describe what to predict in 2–3 lines — Kumo handles the rest
PREDICT BOOL(APPOINTMENTS.STATUS = 'No-show', 0, 1, days) FOR EACH APPOINTMENTS.APPT_ID WHERE APPOINTMENTS.SCHEDULED_DATE >= '2025-03-15'
Prediction output
Every entity gets a score, updated continuously
| APPT_ID | PATIENT_ID | SCHEDULED_DATE | NO_SHOW_PROB |
|---|---|---|---|
| A001 | P2001 | 2025-03-15 | 0.71 |
| A002 | P2002 | 2025-03-16 | 0.12 |
| A003 | P2003 | 2025-03-15 | 0.08 |
Understand why
Every prediction includes feature attributions — no black boxes
Appointment A001 -- Patient P2001, Follow-up with Dr. Kim
Predicted: 71% no-show probability
Top contributing features
| Feature | Value | Attribution |
|---|---|---|
| Past no-show rate (last 12 months) | 3 of 5 appts | 35% |
| Days since appointment was booked | 28 days | 22% |
| Provider's Monday no-show rate | 18% | 17% |
| Distance from patient ZIP to clinic | 8.2 miles | 14% |
| Insurance type | Medicaid | 12% |
Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability
PQL Documentation
Learn the Predictive Query Language — SQL-like syntax for defining any prediction task in 2–3 lines.
Python SDK
Integrate Kumo predictions into your pipelines. Train, evaluate, and deploy models programmatically.
Explainability Docs
Understand feature attributions, model evaluation metrics, and how to build trust with stakeholders.
Frequently asked questions
Common questions about no-show prediction
How does AI predict patient no-shows?
AI predicts no-shows by analyzing the relationships between patients, providers, appointment types, scheduling patterns, and visit history. Graph-based models detect that specific combinations of factors (patient + provider + day + booking lead time) predict no-shows at rates far higher than any single factor suggests. This allows targeted interventions rather than blanket reminder systems.
How much do patient no-shows cost healthcare systems?
Patient no-shows cost the US healthcare system $150B annually. A mid-size health system with 200 providers loses $3.2M per year to empty appointment slots. Beyond direct revenue loss, no-shows create scheduling inefficiencies, extend wait times for other patients, and delay care for the patients who miss.
What interventions reduce patient no-show rates?
The most effective interventions are targeted by risk level: high-risk patients get ride assistance offers, schedule-change options, or same-day phone outreach. Medium-risk patients get enhanced reminders with easy rescheduling links. Predictive overbooking fills high-risk slots automatically. Targeted approaches reduce no-show impact by 20-30% compared to 5-10% for universal reminders.
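The risk-tiered routing described above can be sketched as a simple policy function. The cutoffs (0.5 and 0.2) and intervention labels are illustrative policy choices, not Kumo defaults:

```python
def intervention(no_show_prob):
    # Illustrative thresholds; tune against your own cost of outreach
    # versus the cost of an empty slot.
    if no_show_prob >= 0.5:
        return "same-day phone outreach + ride assistance offer"
    if no_show_prob >= 0.2:
        return "enhanced reminder with one-tap reschedule link"
    return "standard reminder"

# Applied to the sample predictions shown earlier:
plans = {appt: intervention(p)
         for appt, p in [("A001", 0.71), ("A002", 0.12), ("A003", 0.08)]}
```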
Can AI help with appointment overbooking in healthcare?
Yes. AI-driven overbooking uses per-slot no-show probability predictions to determine how many patients to book for each time slot. A Monday morning specialist slot with a predicted 25% no-show rate might be double-booked, while a Tuesday afternoon primary care slot with 5% predicted no-show rate is not. This fills 15-20% more slots without creating over-capacity problems.
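The back-of-envelope math behind probability-driven overbooking is expected arrivals per slot. The slot data below is illustrative:

```python
def expected_arrivals(no_show_probs):
    """Expected patients who show up, given each booking's no-show prob."""
    return sum(1.0 - p for p in no_show_probs)

# Monday specialist slot, double-booked at 25% predicted no-show each:
monday = expected_arrivals([0.25, 0.25])   # 1.5 expected arrivals
# Tuesday primary-care slot, single booking at 5% predicted no-show:
tuesday = expected_arrivals([0.05])        # 0.95 expected arrivals
# A real policy also prices the chance of both patients arriving
# (0.75 * 0.75 = 56% here), which is why overbooking targets only
# slots with high predicted no-show probability.
```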
Bottom line: A 200-provider health system recovering 20% of no-show slots through predictive overbooking and targeted outreach saves $3.2M annually. Kumo learns patient-provider-schedule interaction patterns that flat reminder systems cannot capture.
One Platform. One Model. Infinite Predictions.
KumoRFM
Relational Foundation Model
Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.
For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.
Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.