Seasonal Trend Prediction
“What will total revenue be for each product category over the next quarter?”
Book a demo and get a free trial of the full platform: research agent, fine-tuning capabilities, and forward-deployed engineer support.


A real-world example
What will total revenue be for each product category over the next quarter?
Seasonal planning based on year-over-year comparisons misses emerging trends, promotional lifts, and cross-category cannibalization. A 10% forecast error at the category level can mean $10–50M in misallocated inventory and marketing spend across a large retailer. When electronics surge because of a product launch while home goods soften, the YoY model sees neither shift until it is too late.
Quick answer
Seasonal trend prediction forecasts quarterly revenue by product category, capturing emerging trends, promotional lifts, and cross-category cannibalization that year-over-year comparisons miss. Graph-based models connect categories to sales, products, promotions, and external signals, detecting share shifts between categories before they show up in aggregate numbers.
Approaches compared
4 ways to solve this problem
1. Year-Over-Year Comparison
Project next quarter's revenue as the same quarter last year multiplied by a growth rate. The standard approach for most finance and merchandising teams.
Best for
Stable businesses with consistent year-over-year growth and minimal category-level disruption.
Watch out for
Misses everything that changed since last year: new product launches, promotional calendar shifts, competitor moves, and cross-category cannibalization. A 10% forecast error at the category level means $10–50M in misallocated resources for large retailers.
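The YoY projection really is this simple, which is both its appeal and its weakness. A minimal sketch with illustrative numbers (the $3.6M Electronics figure echoes the worked example later on this page):

```python
# Naive year-over-year baseline: forecast = same quarter last year * (1 + growth).
# Category revenues and the growth rate are illustrative, not real data.
last_year_q4 = {"Electronics": 3.6e6, "Home Office": 1.5e6, "Wearables": 0.8e6}
growth_rate = 0.05  # one assumed growth rate applied uniformly

forecast = {cat: rev * (1 + growth_rate) for cat, rev in last_year_q4.items()}

# Every category gets the same multiplier: product launches, promo overlaps,
# and cross-category share shifts are structurally invisible to this model.
```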
2. Seasonal Decomposition (STL, X-13)
Decompose each category's revenue into trend, seasonal, and residual components, then recompose them to produce the forecast. The statistical approach used by planning teams with time-series expertise.
Best for
Categories with strong, stable seasonal patterns and several years of clean history.
Watch out for
Treats each category independently. Cannot detect that Electronics is cannibalizing Accessories, or that a promotional calendar change is shifting seasonal peaks earlier. These cross-category dependencies are invisible to decomposition methods.
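To make the "treats each category independently" limitation concrete, here is a naive additive decomposition in the spirit of STL, on synthetic quarterly data (production work would use statsmodels' STL or X-13; the moving-average trend here is crude at the series edges):

```python
import numpy as np

# Synthetic data: 4 years of quarterly revenue with an upward trend and a Q4 peak.
period = 4
revenue = np.array([10, 11, 10, 15,  11, 12, 11, 17,
                    12, 13, 12, 19,  13, 14, 13, 21], dtype=float)

# Trend: centered moving average over one full seasonal cycle.
kernel = np.ones(period) / period
trend = np.convolve(revenue, kernel, mode="same")

# Seasonal component: average detrended value at each quarter position,
# centered so the components sum back to the original series.
detrended = revenue - trend
seasonal = np.array([detrended[i::period].mean() for i in range(period)])
seasonal -= seasonal.mean()

residual = detrended - np.tile(seasonal, len(revenue) // period)

# Forecast for the next Q1 = last trend estimate + that quarter's seasonal term.
# Note: the decomposition sees only this one series. Nothing here can detect
# that another category's product launch is pulling share away from it.
next_q_forecast = trend[-1] + seasonal[0]
```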
3. ML Regression with External Features
Train XGBoost or similar models on features including lagged revenue, promotional spend, competitor pricing, and macroeconomic indicators. Captures non-linear relationships within a single table.
Best for
Teams with feature engineering resources and access to external data sources like consumer sentiment and competitor pricing.
Watch out for
Cross-category cannibalization and share-shift dynamics require explicit feature engineering. You need to manually create features like 'Electronics promotional spend as a share of total' and 'new product launches in adjacent categories.' This is error-prone and slow to iterate.
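A hedged sketch of what that manual feature engineering looks like in practice. Table, column, and adjacency definitions below are illustrative, not a real schema; the hand-curated adjacency map is exactly the brittle, slow-to-iterate part:

```python
import pandas as pd

promos = pd.DataFrame({
    "quarter":     ["2025Q3", "2025Q3", "2025Q3"],
    "category":    ["Electronics", "Home Office", "Wearables"],
    "promo_spend": [300_000, 120_000, 80_000],
})

# Feature 1: promotional spend as a share of the quarter's total -- the kind
# of cross-category signal a single-table model cannot infer on its own.
promos["promo_share"] = (
    promos["promo_spend"]
    / promos.groupby("quarter")["promo_spend"].transform("sum")
)

# Feature 2: new product launches in adjacent categories. The adjacency map
# must be maintained by hand and silently goes stale as the catalog changes.
launches = {"Electronics": 4, "Home Office": 1, "Wearables": 2}
adjacent = {"Electronics": ["Wearables"], "Home Office": [], "Wearables": ["Electronics"]}
promos["adjacent_launches"] = promos["category"].map(
    lambda c: sum(launches[a] for a in adjacent[c])
)
```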
4. KumoRFM (Graph Neural Networks on Relational Data)
Connects categories to sales, products, promotions, and external signals in a relational graph. Learns cross-category demand dynamics, promotional lifts, and share shifts automatically.
Best for
Multi-category retailers where seasonal patterns interact with promotional calendars, new product launches, and cross-category competition.
Watch out for
The graph advantage is largest when categories have meaningful interactions (shared customers, substitution effects, co-purchase patterns). For fully independent product lines with no cross-category dynamics, simpler models may suffice.
Key metric: Graph-based seasonal models reduce category-level forecast error by 30-40%, driven by cross-category cannibalization signals and promotional interaction effects that year-over-year models structurally miss.
Why relational data changes the answer
Category CAT-10 (Electronics) is forecast at $4.2M for next quarter. A year-over-year model based on last year's $3.6M might predict $3.8M with modest growth. But the relational graph sees several compounding factors: two overlapping holiday promotions are scheduled (PROMOTIONS table), which historically amplify Electronics more than other categories. The Wearables category (CAT-30) has a new product launch that is pulling share from Accessories, and some of that displaced demand flows to Electronics peripherals. Consumer sentiment indices are positive, correlating with 8-12% Electronics uplift in prior periods.
Critically, the cross-category dynamics are bidirectional. While Electronics gains share, Home Office (CAT-20) is cannibalizing Furniture. The same customers who buy home office equipment are spending less on traditional furniture, a substitution pattern that only appears when you join the SALES table to CUSTOMERS and then to CATEGORIES. Graph neural networks discover these multi-hop category interactions automatically. They learn that when Electronics gains share, Accessories loses 3.2%, and that promotional calendar overlap amplifies this effect. On the RelBench benchmark, relational models score 76.71 vs 62.44 for flat-table approaches. For seasonal trend prediction, where cross-category dynamics are the primary source of error in traditional models, the improvement is often even larger.
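The multi-hop join described above can be sketched in a few lines of pandas. The data is synthetic and the customer link is an assumption for illustration (the sample SALES schema shown later on this page omits a customer_id column):

```python
import pandas as pd

# Per-customer spend across two categories. Substitution only becomes visible
# once sales are joined to customers and then grouped by category.
sales = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2", "C2", "C3", "C3"],
    "category":    ["Home Office", "Furniture", "Home Office", "Furniture",
                    "Home Office", "Furniture"],
    "revenue":     [900, 100, 700, 250, 100, 800],
})

# Pivot to one row per customer, one column per category.
spend = sales.pivot_table(index="customer_id", columns="category",
                          values="revenue", aggfunc="sum").fillna(0)

# A strongly negative correlation across customers is the cannibalization
# signature: customers who spend more on Home Office spend less on Furniture.
corr = spend["Home Office"].corr(spend["Furniture"])
```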
Forecasting seasonal trends with year-over-year is like predicting next summer's weather by looking at last summer. You get the broad shape right but miss everything that changed: a new building blocking wind patterns (competitor), a drought upstream (supply chain shift), and a festival that was not there before (promotional event). The seasonal pattern is a baseline, not a forecast.
How KumoRFM solves this
Relational intelligence for every forecast
Kumo learns from the relational graph connecting categories to sales, products, promotions, and external signals. Instead of treating each category as an isolated time series, Kumo sees that the Electronics category's Q4 surge is amplified by an overlapping holiday promotion, that Home Office is cannibalizing Furniture, and that a new product launch in Wearables is pulling share from Accessories. These cross-category and cross-signal dependencies produce quarterly forecasts that capture the full picture.
From data to predictions
See the full pipeline in action
Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.
Your data
The relational tables Kumo learns from
CATEGORIES
| category_id | category_name | department |
|---|---|---|
| CAT-10 | Electronics | Technology |
| CAT-20 | Home Office | Furniture |
| CAT-30 | Wearables | Accessories |
SALES
| sale_id | product_id | category_id | revenue | units | timestamp |
|---|---|---|---|---|---|
| SL-7001 | PRD-401 | CAT-10 | $249.99 | 1 | 2025-09-14 |
| SL-7002 | PRD-502 | CAT-20 | $189.00 | 1 | 2025-09-14 |
| SL-7003 | PRD-610 | CAT-30 | $89.95 | 2 | 2025-09-15 |
PROMOTIONS
| promo_id | category_id | discount_pct | start_date | end_date |
|---|---|---|---|---|
| PRM-01 | CAT-10 | 15 | 2025-11-20 | 2025-12-01 |
| PRM-02 | CAT-20 | 10 | 2025-10-01 | 2025-10-15 |
| PRM-03 | CAT-30 | 20 | 2025-11-25 | 2025-12-02 |
Write your PQL query
Describe what to predict in 2–3 lines — Kumo handles the rest
PREDICT SUM(SALES.REVENUE, 0, 90, days) FOR EACH CATEGORIES.CATEGORY_ID
Prediction output
Every entity gets a score, updated continuously
| CATEGORY_ID | TIMESTAMP | TARGET_PRED |
|---|---|---|
| CAT-10 | 2025-10-01 | $4.2M |
| CAT-20 | 2025-10-01 | $1.8M |
| CAT-30 | 2025-10-01 | $890K |
Understand why
Every prediction includes feature attributions — no black boxes
Category CAT-10 (Electronics)
Predicted: $4.2M revenue in next quarter
Top contributing features
Prior year same quarter
$3.6M
28% attribution
Promotional calendar overlap
2 promos
24% attribution
Cross-category trend (share gain)
+3.2%
20% attribution
Macro consumer sentiment
Positive
16% attribution
New product launches
4 SKUs
12% attribution
Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability
PQL Documentation
Learn the Predictive Query Language — SQL-like syntax for defining any prediction task in 2–3 lines.
Python SDK
Integrate Kumo predictions into your pipelines. Train, evaluate, and deploy models programmatically.
Explainability Docs
Understand feature attributions, model evaluation metrics, and how to build trust with stakeholders.
Frequently asked questions
Common questions about seasonal trend prediction
How much does seasonal forecasting improve with AI?
Graph-based seasonal models reduce category-level forecast error by 30-40% compared to year-over-year methods and 15-25% compared to statistical decomposition. The improvement is concentrated in categories experiencing share shifts, new product launches, or promotional calendar changes.
Can AI detect cross-category cannibalization?
Yes. Graph models connect categories to shared customers and shared sales events. When customers shift spending from one category to another, the graph captures this substitution pattern. Traditional models that forecast each category independently are structurally blind to cannibalization effects.
How do promotional calendars affect seasonal forecasts?
Promotions can shift seasonal peaks by 1-2 weeks and amplify or suppress demand by 15-40%. Graph models learn the historical interaction between promotion type, category, and seasonal timing. Two overlapping promotions in the same category often produce sub-additive lift (less than the sum of individual promotions), a non-linearity that additive models miss.
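The sub-additivity is easy to see with worked arithmetic (all numbers below are illustrative, including the 0.7 dampening factor):

```python
# Sub-additive promotional lift, illustrative numbers only.
baseline = 1_000_000   # quarterly category revenue with no promotions
lift_a = 0.25          # promo A alone: +25%
lift_b = 0.20          # promo B alone: +20%

# What a purely additive model assumes when both promos run:
additive = baseline * (1 + lift_a + lift_b)

# What overlap often produces: a dampened combined lift (0.7 is assumed).
interaction = 0.7
observed = baseline * (1 + interaction * (lift_a + lift_b))

# observed < additive: the overlap yields less than the sum of the parts,
# a non-linearity an additive model cannot represent.
```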
What data improves seasonal trend prediction the most?
Beyond historical sales, the highest-value additions are: promotional calendars (when and what discounts are planned), new product launch schedules, competitor pricing data, and macroeconomic indicators like consumer confidence. Each additional data source reduces forecast error by 3-8% incrementally.
Bottom line: Capture emerging trends and promotional lifts that YoY comparisons miss — reduce category-level forecast error by 30–40%.
Related use cases
Explore more forecasting use cases
Topics covered
One Platform. One Model. Infinite Predictions.
KumoRFM
Relational Foundation Model
Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.
For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.




