- a natural-language summary that explains the prediction in plain English
- structured details that let you inspect the most important columns, cohorts, and subgraphs behind that prediction
Generate Explanations
KumoRFM can generate explanations alongside predictions by using the `explain` parameter.
For example, a query like the following predicts whether an order will have a return within the next 30 days:
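The notebook's exact query is not reproduced here; as a rough sketch, the query string and call might look like the following. The table and column names (`orders`, `returns`, `order_id`) and the predictive-query syntax are assumptions for illustration, not taken from the notebook:

```python
# Hypothetical predictive-query string; table/column names are assumptions.
query = (
    "PREDICT COUNT(returns.*, 0, 30, days) > 0 "
    "FOR orders.order_id = 42"
)

# With a KumoRFM model object in scope, enabling explanations would look
# roughly like:
#   explanation = model.predict(query, explain=True)
```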
When `explain=True` is set, the return type changes from a prediction `DataFrame` to an `Explanation` object. That object still contains the prediction output, but it also includes human-readable and structured explanation data.
Skip the summary (faster, returns only prediction details):
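A minimal sketch of that call, using a stand-in for `ExplainConfig` (the real class lives in the KumoRFM SDK, and its import path and full signature are assumptions here):

```python
from dataclasses import dataclass

# Stand-in mirroring the `ExplainConfig(skip_summary=True)` usage described
# later on this page; not the real SDK class.
@dataclass
class ExplainConfig:
    skip_summary: bool = False

cfg = ExplainConfig(skip_summary=True)
# With a real model and query in scope, the call shape would be:
#   explanation = model.predict(query, explain=cfg)
# explanation.summary would be skipped; explanation.details stays populated.
```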
Working with the Explanation Object
The `Explanation` object supports indexing for convenience:

- `explanation.prediction`: the original prediction `DataFrame`
- `explanation.summary`: a readable narrative of the key drivers
- `explanation.details`: structured explainability outputs for deeper inspection
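As a sketch of the shape described above, here is a stand-in dataclass with the three attributes (the real `Explanation` class comes from the SDK, and the exact indexing behavior is an assumption):

```python
from dataclasses import dataclass
from typing import Any

# Stand-in mirroring the documented attributes; not the real SDK class.
@dataclass
class Explanation:
    prediction: Any  # the original prediction DataFrame
    summary: str     # readable narrative of the key drivers
    details: Any     # structured explainability outputs

    # "Supports indexing": assumed here to mean key-based access.
    def __getitem__(self, key: str) -> Any:
        return getattr(self, key)

exp = Explanation(
    prediction=[{"TARGET_PRED": True}],
    summary="Return risk is elevated because ...",
    details={"cohorts": [], "subgraphs": []},
)
assert exp["summary"] == exp.summary
```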
Understanding the Summary
The default summary is designed to answer a business-facing question: Why did KumoRFM make this prediction? In the notebook example, the summary highlights the most important signals behind the prediction, such as:

- the order date
- the sales channel
- the order price
- user characteristics
- item characteristics
Structured Explanation Details
The notebook also shows that `explanation.details` contains richer structured outputs. In particular, it breaks the explanation into two useful views:
- column analysis via `details.cohorts`
- subgraph analysis via `details.subgraphs`
Column Analysis
Column analysis gives you a global view of how values in a column relate to outcomes across the in-context examples KumoRFM used for the prediction. Each cohort object includes fields such as:

- `table_name`: the table being analyzed
- `column_name`: the feature or aggregate being analyzed
- `hop`: how far the table is from the entity table
- `stype`: the semantic type, such as numerical, categorical, or timestamp
- `cohorts`: the value buckets or categories
- `populations`: how much of the sampled context falls into each cohort
- `targets`: the average target value associated with each cohort

Together, these fields help answer questions such as:
- Which value ranges generally increase or decrease risk?
- Which categories are most associated with a positive outcome?
- Which features appear to matter globally across similar examples?
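The questions above can be sketched in code. The record below is mocked data shaped like the cohort fields listed earlier; real objects come from `explanation.details.cohorts`, and the values here are invented for illustration:

```python
from dataclasses import dataclass

# Mocked cohort record; field names follow the documentation above,
# values are invented.
@dataclass
class ColumnCohorts:
    table_name: str
    column_name: str
    hop: int
    stype: str
    cohorts: list        # value buckets or categories
    populations: list    # share of sampled context per cohort
    targets: list        # average target value per cohort

price = ColumnCohorts(
    table_name="orders", column_name="price", hop=0, stype="numerical",
    cohorts=["<10", "10-50", ">50"],
    populations=[0.2, 0.5, 0.3],
    targets=[0.05, 0.12, 0.31],
)

# Which value bucket carries the highest average target (e.g., return rate)?
riskiest = max(zip(price.cohorts, price.targets), key=lambda t: t[1])
print(riskiest)  # ('>50', 0.31)
```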
Subgraph Analysis
Subgraph analysis gives you a local view of the relational evidence around the specific entity being predicted. In the notebook, the subgraph explanation is used to visualize the most important neighboring records and edges around the seed entity. This helps you see not just which columns mattered, but which nearby records in the graph were most influential. This is especially useful when:

- the query depends on several linked tables
- the prediction is driven by specific related records
- you want to visualize the local graph neighborhood for debugging or presentation purposes
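As an illustration of exploring such a neighborhood, the sketch below builds an adjacency map from a mocked edge list; the record identifiers are invented, and real subgraph data comes from `explanation.details.subgraphs`:

```python
from collections import defaultdict

# Mocked (source, destination) record pairs around a seed order;
# identifiers are invented for illustration.
edges = [
    ("orders:42", "users:7"),
    ("orders:42", "items:913"),
    ("users:7", "orders:31"),
]

# Undirected adjacency map over the mocked edges.
neighbors = defaultdict(set)
for src, dst in edges:
    neighbors[src].add(dst)
    neighbors[dst].add(src)

# One-hop neighborhood of the seed entity:
print(sorted(neighbors["orders:42"]))  # ['items:913', 'users:7']
```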
When to Use Each Explanation Type
Use the natural-language summary when you want a fast explanation for a single prediction. Use cohort analysis when you want to understand broader feature patterns in the sampled context. Use subgraph analysis when you want to understand which linked records and paths in the graph influenced the result.

Configuration Tips
If you need maximum speed, use `ExplainConfig(skip_summary=True)` and work only with the structured explanation details.
If you are debugging a specific prediction, start with the summary, then inspect `details.cohorts` and `details.subgraphs` to validate the model's reasoning.
Current Limitations
Explainability is currently supported only for single-entity predictions with `run_mode="FAST"`.