Marketing Budget Allocation
A real-world example
How much revenue will each marketing campaign generate over the next 30 days?
Marketing teams allocate budgets based on last-touch attribution and historical ROAS, missing the relational signals that drive true incrementality. An estimated 30–40% of ad spend goes to conversions that would have happened anyway. Accurate revenue-per-campaign predictions let you reallocate millions to the channels that actually drive incremental revenue — but only if you understand the audience overlap, channel saturation, and customer lifetime value context behind each campaign.
Quick answer
Marketing budget allocation predicts how much revenue each campaign will generate over the next 30 days, enabling reallocation from low-performing to high-performing channels. Graph-based models separate true incremental revenue from organic conversions by learning audience overlap, channel saturation, and customer lifetime value context that last-touch attribution misses.
Approaches compared
4 ways to solve this problem
1. Last-Touch Attribution + Historical ROAS
Credit the last touchpoint before conversion with 100% of the revenue. Allocate budget to channels with the highest historical return on ad spend. The default approach for most marketing teams.
Best for
Simple campaigns with single-channel customer journeys where the last touch is genuinely the conversion driver.
Watch out for
Massively over-credits retargeting and branded search while under-crediting awareness channels. 30–40% of attributed revenue comes from customers who would have converted organically. Leads to systematic over-investment in bottom-funnel channels.
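The mechanics of last-touch attribution fit in a few lines of pandas. This is an illustrative sketch, not production attribution code; the touchpoint log, column names, and spend figures are invented for the example:

```python
import pandas as pd

# Illustrative touchpoint log: one row per ad interaction.
# Revenue is recorded on the converting (final) touch only.
touchpoints = pd.DataFrame({
    "customer_id": ["C1", "C1", "C1", "C2", "C2"],
    "campaign_id": ["AWARE-1", "SEARCH-1", "RETARGET-1", "AWARE-1", "SEARCH-1"],
    "timestamp": pd.to_datetime(
        ["2025-09-01", "2025-09-05", "2025-09-09", "2025-09-02", "2025-09-06"]),
    "revenue": [0.0, 0.0, 120.0, 0.0, 80.0],
})
spend = {"AWARE-1": 50.0, "SEARCH-1": 40.0, "RETARGET-1": 30.0}

# Last-touch attribution: the final touch before conversion gets 100% of revenue.
last_touch = (touchpoints.sort_values("timestamp")
              .groupby("customer_id").tail(1))
attributed = last_touch.groupby("campaign_id")["revenue"].sum()

# Historical ROAS = attributed revenue / spend.
roas = {c: attributed.get(c, 0.0) / s for c, s in spend.items()}
print(roas)  # awareness touches earn zero credit despite starting both journeys
```

Note how the awareness campaign shows 0x ROAS even though it opened both customer journeys — exactly the bottom-funnel bias described above.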
2. Multi-Touch Attribution (MTA)
Spread credit across all touchpoints in the customer journey using rules (linear, time-decay, U-shaped). More sophisticated than last-touch but still relies on arbitrary weighting.
Best for
Teams that want a fairer view of the full funnel without building ML models. A reasonable step up from last-touch.
Watch out for
The credit allocation rules are arbitrary. A time-decay model that gives 40% of the credit to the last touch and 20% to the first is still a guess, just a different one from last-touch. No MTA model measures whether a touchpoint actually caused the conversion.
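The three standard rule families mentioned above — linear, time-decay, and U-shaped — are simple weighting schemes. A minimal sketch (the half-life and endpoint-share parameters are illustrative defaults, not industry standards):

```python
def linear_credit(n):
    """Linear MTA: every touchpoint gets equal credit."""
    return [1.0 / n] * n

def time_decay_credit(days_before_conversion, half_life=7.0):
    """Time-decay MTA: credit halves every `half_life` days before conversion."""
    weights = [0.5 ** (d / half_life) for d in days_before_conversion]
    total = sum(weights)
    return [w / total for w in weights]

def u_shaped_credit(n, endpoint_share=0.4):
    """U-shaped MTA: 40% each to first and last touch, rest split across the middle."""
    if n == 1:
        return [1.0]
    if n == 2:
        return [0.5, 0.5]
    middle = (1.0 - 2 * endpoint_share) / (n - 2)
    return [endpoint_share] + [middle] * (n - 2) + [endpoint_share]

# A 4-touch journey: touches 10, 7, 3, and 0 days before conversion.
print(linear_credit(4))                 # equal quarters
print(time_decay_credit([10, 7, 3, 0])) # most credit to the most recent touch
print(u_shaped_credit(4))               # ~40 / 10 / 10 / 40 split
```

Each function returns a valid credit allocation (weights sum to 1), which is precisely the problem: all three are internally consistent and none measures causation.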
3. Marketing Mix Modeling (MMM)
Regression-based models that correlate aggregate channel spend with total conversions over time. Accounts for diminishing returns and channel saturation at the macro level.
Best for
CMOs who need portfolio-level budget allocation decisions across online and offline channels. Good at measuring channel saturation.
Watch out for
Operates at the aggregate level, not the individual campaign level, so it cannot tell you which specific campaign to increase or decrease. Requires 2+ years of historical data and struggles with new channels. Typically updated quarterly, which is too slow for tactical optimization.
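To make the MMM idea concrete, here is a minimal sketch on synthetic data. A `log1p` transform stands in for the adstock and saturation curves a real MMM would fit; the channel names, coefficients, and noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly aggregates: spend per channel (in $K) over two years.
weeks = 104
spend = rng.uniform(10, 100, size=(weeks, 3))  # search, social, display

# Diminishing returns: revenue is concave in spend (log1p saturation curve).
true_coefs = np.array([8.0, 5.0, 2.0])
revenue = np.log1p(spend) @ true_coefs + 50 + rng.normal(0, 2, weeks)

# Fit the MMM: regress total revenue on saturated (log-transformed) spend.
X = np.column_stack([np.log1p(spend), np.ones(weeks)])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)

print(coefs[:3])  # recovered channel effects, close to [8, 5, 2]
```

The regression recovers channel-level saturation effects well — but notice that the model never sees a campaign ID or a customer ID, which is why MMM cannot answer campaign-level questions.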
4. KumoRFM (Graph Neural Networks on Relational Data)
Connects campaigns to conversions, customers, segments, and channel histories in a relational graph. Predicts revenue per campaign by learning audience overlap, channel saturation, and customer journey context. Separates organic from incremental conversions.
Best for
Marketing teams running 10+ campaigns across multiple channels who need campaign-level (not just channel-level) revenue predictions.
Watch out for
Requires campaign-level conversion tracking with customer-level attribution. If your data only has channel-level aggregates without customer IDs, you need MMM instead.
Key metric: Graph-based campaign revenue models reveal that 30–40% of attributed conversions are organic, enabling budget reallocation that generates $3–8M more revenue from the same total spend.
Why relational data changes the answer
Campaign C-301 (Fall Retargeting, Display) has a last-touch ROAS of 2.8x. But the relational graph reveals that 60% of its audience overlaps with high-LTV customers who would have converted organically. The true incremental ROAS is closer to 1.1x. Meanwhile, Campaign C-450 (New Segment Push, Social) targets a fresh audience with only 12% overlap with existing customers. Its last-touch ROAS looks lower at 1.9x, but the true incremental ROAS is 4.2x because almost every conversion is genuinely new.
These audience overlap and incrementality signals require joining the CAMPAIGNS table to CUSTOMERS to CONVERSIONS and computing overlap between campaign audiences. A flat model that scores each campaign on its own conversion data cannot see this overlap. The graph neural network learns from the full customer-campaign-conversion graph, discovering that Campaign C-450's audience has low existing brand awareness (from CUSTOMERS.signup_date), low channel saturation (from TOUCHPOINTS history), and high creative engagement (from click data). These compound signals produce campaign revenue predictions that account for true incrementality, not just attributed conversions. The result: marketing teams reallocate 30-40% of budget from over-credited campaigns to genuinely incremental ones.
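The overlap computation described above amounts to a join across the tables. A minimal pandas sketch, using a handful of made-up rows that mirror the CAMPAIGNS / CONVERSIONS / CUSTOMERS schema shown on this page:

```python
import pandas as pd

# Toy rows mirroring the CONVERSIONS and CUSTOMERS example tables.
conversions = pd.DataFrame({
    "campaign_id": ["C-301", "C-301", "C-301", "C-450", "C-450"],
    "customer_id": ["CUST-882", "CUST-900", "CUST-901", "CUST-2040", "CUST-2041"],
})
customers = pd.DataFrame({
    "customer_id": ["CUST-882", "CUST-900", "CUST-901", "CUST-2040", "CUST-2041"],
    "ltv_tier": ["Gold", "Gold", "Bronze", "Bronze", "Bronze"],
})

# Share of each campaign's converters in the high-LTV (likely-organic) segment.
joined = conversions.merge(customers, on="customer_id")
overlap = (joined.assign(high_ltv=joined["ltv_tier"] == "Gold")
           .groupby("campaign_id")["high_ltv"].mean())
print(overlap)  # C-301 skews high-LTV; C-450 reaches an untapped segment
```

A flat per-campaign model never performs this join, which is exactly why it cannot distinguish C-301's organic-heavy audience from C-450's genuinely new one.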
Allocating budget with last-touch attribution is like a basketball coach giving all credit for a win to the player who made the last shot. The relational model sees the full game: the assists, the defense that created turnovers, and the plays that got the ball to the shooter. Some players score a lot on easy baskets (retargeting). Others create opportunities that never show up in their stats (awareness campaigns). You want to invest in the players who change the outcome of the game.
How KumoRFM solves this
Relational intelligence for every forecast
Kumo connects campaigns to conversions, customers, segments, and channel histories in a single relational graph. Instead of scoring each campaign on its own last-touch ROAS, Kumo learns that Campaign C-301's audience overlaps 60% with high-LTV customers who would convert organically, while Campaign C-450 reaches an untapped segment with genuine incremental lift. The graph captures channel saturation, creative fatigue, and customer journey context that flat attribution models cannot see.
From data to predictions
See the full pipeline in action
Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.
Your data
The relational tables Kumo learns from
CAMPAIGNS
| campaign_id | campaign_name | channel | budget | start_date |
|---|---|---|---|---|
| C-301 | Fall Retargeting | Display | $50,000 | 2025-09-01 |
| C-302 | Brand Search Q4 | Paid Search | $120,000 | 2025-10-01 |
| C-450 | New Segment Push | Social | $35,000 | 2025-09-15 |
CONVERSIONS
| conversion_id | campaign_id | customer_id | revenue | timestamp |
|---|---|---|---|---|
| CVR-9001 | C-301 | CUST-882 | $124.50 | 2025-09-18 |
| CVR-9002 | C-302 | CUST-1105 | $89.00 | 2025-10-03 |
| CVR-9003 | C-450 | CUST-2040 | $215.00 | 2025-09-20 |
CUSTOMERS
| customer_id | segment | ltv_tier | signup_date |
|---|---|---|---|
| CUST-882 | Returning | Gold | 2023-03-12 |
| CUST-1105 | Returning | Silver | 2024-01-08 |
| CUST-2040 | New | Bronze | 2025-09-14 |
Write your PQL query
Describe what to predict in 2–3 lines — Kumo handles the rest
PREDICT SUM(CONVERSIONS.REVENUE, 0, 30, days) FOR EACH CAMPAIGNS.CAMPAIGN_ID
Prediction output
Every entity gets a score, updated continuously
| CAMPAIGN_ID | TIMESTAMP | TARGET_PRED |
|---|---|---|
| C-301 | 2025-10-01 | $142K |
| C-302 | 2025-10-01 | $38K |
| C-450 | 2025-10-01 | $215K |
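One simple way to act on a prediction table like this — a sketch, not a prescribed workflow — is to reallocate the fixed budget pool in proportion to each campaign's predicted 30-day revenue. The budget figures come from the CAMPAIGNS table above; real reallocation would also respect minimum spends and saturation limits:

```python
# Current budgets (from CAMPAIGNS) and predicted next-30-day revenue (from output).
budgets = {"C-301": 50_000, "C-302": 120_000, "C-450": 35_000}
predicted = {"C-301": 142_000, "C-302": 38_000, "C-450": 215_000}

total_budget = sum(budgets.values())  # the $205K pool stays fixed
total_pred = sum(predicted.values())

# Reallocate each campaign's budget proportionally to its predicted revenue.
new_budgets = {c: round(total_budget * p / total_pred)
               for c, p in predicted.items()}
print(new_budgets)  # shifts spend toward C-450, away from C-302
```

Under this rule, C-450's budget roughly triples while C-302's shrinks sharply — the reallocation the predictions are arguing for.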
Understand why
Every prediction includes feature attributions — no black boxes
Campaign C-450 (New Segment Push)
Predicted: $215K revenue in next 30 days
Top contributing features
| Feature | Value | Attribution |
|---|---|---|
| Audience overlap with high-LTV segment | 12% | 30% |
| Channel saturation (Social) | Low | 25% |
| Campaign recency (fresh audience) | 5 days | 20% |
| Creative engagement rate | 4.8% | 15% |
| Customer segment growth | +22% | 10% |
Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability
PQL Documentation
Learn the Predictive Query Language — SQL-like syntax for defining any prediction task in 2–3 lines.
Python SDK
Integrate Kumo predictions into your pipelines. Train, evaluate, and deploy models programmatically.
Explainability Docs
Understand feature attributions, model evaluation metrics, and how to build trust with stakeholders.
Frequently asked questions
Common questions about marketing budget allocation
How much marketing budget is typically wasted?
Industry research consistently shows 30–40% of digital ad spend goes to campaigns that would have converted organically. Branded search and retargeting are the worst offenders, often taking credit for conversions that were already in progress. Graph-based models reveal this waste by measuring true incremental lift per campaign.
What is the difference between ROAS and incremental ROAS?
ROAS measures total revenue attributed to a campaign divided by spend. Incremental ROAS measures only the revenue that would not have happened without the campaign. A retargeting campaign with 3x ROAS may have only 1.2x incremental ROAS if most of those customers would have purchased anyway. Incremental ROAS is the metric that actually matters for budget allocation.
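The retargeting example in this answer works out as follows, under the simplifying assumption that the organic share applies uniformly to attributed revenue:

```python
def incremental_roas(roas, organic_share):
    """Discount attributed ROAS by the share of conversions that were organic."""
    return roas * (1 - organic_share)

# The example from the answer above: 3x ROAS with 60% organic conversions.
print(round(incremental_roas(3.0, 0.60), 2))  # 1.2
```

The same discount applied to a 2.8x campaign with 60% organic overlap gives roughly 1.1x, consistent with the Campaign C-301 example earlier on this page.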
Can AI predict revenue for campaigns that have not launched yet?
Yes. By learning from the audience, channel, and creative characteristics of past campaigns, graph models can predict revenue for new campaigns before they launch. The prediction accuracy depends on how similar the new campaign is to historical campaigns in the training data. Entirely new channels or audience segments have wider prediction intervals.
How often should marketing budgets be reallocated?
Weekly reallocation based on predicted next-30-day revenue per campaign is the sweet spot. Monthly is too slow to catch declining campaign performance. Daily is too reactive and does not give campaigns enough time to demonstrate results. The graph model updates predictions daily, and the marketing team acts on the weekly summary.
Bottom line: Redirect 30–40% of wasted ad spend to truly incremental campaigns — turning the same budget into millions more in revenue.
Related use cases
Explore more forecasting use cases
Topics covered
One Platform. One Model. Infinite Predictions.
KumoRFM
Relational Foundation Model
Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.
For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.
Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.




