
Transaction Fraud Detection

Is this transaction fraudulent?

Book a demo and get a free trial of the full platform: research agent, fine-tune capabilities, and forward-deployed engineer support.


A real-world example

Is this transaction fraudulent?

US card fraud losses exceeded $12B in 2024 (Nilson Report). Legacy rule-based systems generate 95%+ false-positive rates, blocking legitimate purchases and driving $118B in annual false declines (Aite-Novarica). Every false decline costs the issuer $118 in lost revenue and customer goodwill. Meanwhile, sophisticated fraud rings exploit the blind spots between siloed detection systems, running small test transactions across merchant categories before executing high-value fraud. The data needed to detect these patterns spans cards, merchants, devices, and time-series transaction flows.

Quick answer

The most effective fraud detection models combine cardholder behavior, merchant patterns, device signals, and transaction velocity into a single relational graph. Graph-based ML catches fraud rings and test-then-hit patterns that rule-based systems and single-table models miss. On the SAP SALT benchmark, relational graph ML achieves 91% accuracy vs 63% for rules-based approaches and 75% for XGBoost on flat tables.

Approaches compared

4 ways to solve this problem

1. Rules-based detection

Flag transactions that exceed velocity limits, amount thresholds, or geographic rules (e.g., two transactions in different countries within an hour).

Best for

Fast to deploy, easy to explain to regulators, catches known fraud patterns immediately.

Watch out for

95%+ false-positive rates (Aite-Novarica). Fraudsters learn the rules and stay just below thresholds. Every false decline costs $118 in lost revenue and customer trust.
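The rules approach can be sketched in a few lines. The thresholds, field names, and helper below are illustrative, not a production rule engine:

```python
from datetime import datetime, timedelta

# Illustrative rule thresholds -- real programs tune these per portfolio.
AMOUNT_LIMIT = 2000          # flag any single transaction above this amount
VELOCITY_LIMIT = 3           # max transactions allowed per hour per card
VELOCITY_WINDOW = timedelta(hours=1)

def rule_based_flags(txn, recent_txns):
    """Return the list of rules a transaction trips.

    txn: dict with 'amount', 'country', 'timestamp'
    recent_txns: prior transactions on the same card, oldest first
    """
    flags = []
    if txn["amount"] > AMOUNT_LIMIT:
        flags.append("amount_threshold")
    window = [t for t in recent_txns
              if txn["timestamp"] - t["timestamp"] <= VELOCITY_WINDOW]
    if len(window) >= VELOCITY_LIMIT:
        flags.append("velocity_limit")
    if any(t["country"] != txn["country"] for t in window):
        flags.append("geo_mismatch")  # two countries within the window
    return flags

history = [
    {"amount": 12.50, "country": "US", "timestamp": datetime(2025, 9, 15, 14, 22)},
    {"amount": 47.00, "country": "US", "timestamp": datetime(2025, 9, 15, 14, 38)},
]
txn = {"amount": 2899.00, "country": "US", "timestamp": datetime(2025, 9, 15, 14, 41)}
print(rule_based_flags(txn, history))  # only the amount rule fires
```

Note the failure mode: a fraudster who keeps each purchase under AMOUNT_LIMIT and paces transactions just below the velocity limit trips no rule at all.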

2. XGBoost on flat transaction features

Engineer features per transaction (amount vs average, time since last transaction, merchant category frequency) and train a gradient-boosted classifier.

Best for

Significant lift over rules. Handles more complex patterns. Industry standard for many fraud teams.

Watch out for

Each transaction is scored in isolation. Misses multi-transaction patterns like test-then-hit (small purchase at coffee shop, then $2,900 at electronics store 19 minutes later) because the features are aggregated per-row.
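A minimal version of the flat-feature approach, using scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost; the feature columns, synthetic training data, and label rule are illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Per-transaction features, one row each -- no cross-transaction context.
# Columns: amount_vs_avg, minutes_since_last_txn, merchant_cat_frequency
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.lognormal(0.0, 0.5, n),   # amount relative to the card's average
    rng.exponential(120.0, n),    # minutes since the previous transaction
    rng.uniform(0.0, 1.0, n),     # how often the card uses this category
])
# Synthetic labels: large amount in a rarely used category is labeled fraud.
y = ((X[:, 0] > 1.5) & (X[:, 2] < 0.3)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score the $2,899 transaction: 3.6x the card's average, minutes after the
# last purchase, in a never-used category. The row is scored in isolation,
# so the model cannot see that it is the third transaction in 19 minutes.
score = model.predict_proba([[3.6, 3.0, 0.0]])[0, 1]
print(f"fraud probability: {score:.2f}")
```

The last line is the blind spot in miniature: the test-then-hit sequence exists only across rows, and a per-row model never sees it.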

3. Graph analytics (network-based fraud detection)

Build a transaction-merchant-device graph and compute features like shared-device clusters, merchant risk propagation, and velocity across connected entities.

Best for

Catches fraud rings: multiple cards hitting the same compromised merchant, or devices shared across unrelated cardholders.

Watch out for

Graph features are batch-computed. Real-time fraud scoring needs sub-100ms latency, and recomputing graph metrics per transaction is expensive.
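A sketch of the shared-device check with networkx; the node IDs mirror the example tables later on this page, and the ring rule (any device linked to more than one card) is deliberately simplified:

```python
import networkx as nx

# Entity graph: cards, merchants, and devices as nodes, with an edge for
# each observed card-device or card-merchant pairing. IDs are illustrative.
G = nx.Graph()
G.add_edges_from([
    ("card:CH-4001", "device:D-8812"),
    ("card:CH-4002", "device:D-8812"),   # two cards on one device
    ("card:CH-4003", "device:D-1199"),
    ("card:CH-4001", "merchant:M-3042"),
    ("card:CH-4003", "merchant:M-3042"), # one merchant hit by several cards
])

# Shared-device clusters: any device linked to more than one card is a
# candidate fraud-ring signal.
rings = {}
for node in G.nodes:
    if node.startswith("device:"):
        cards = sorted(c for c in G.neighbors(node) if c.startswith("card:"))
        if len(cards) > 1:
            rings[node] = cards
print(rings)  # {'device:D-8812': ['card:CH-4001', 'card:CH-4002']}
```

Features like these are typically recomputed in batch; re-walking the graph on every incoming transaction is the latency problem noted above.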

4. KumoRFM (relational graph ML)

Connect cardholders, transactions, merchants, and device signals into a relational graph. The GNN scores each transaction by learning multi-hop patterns across the full graph in real time.

Best for

Catches the full attack pattern: new device + different state + VPN + merchant category never used + velocity spike. Reduces false positives by 40% while catching 25% more fraud than flat models.

Watch out for

Requires device-signal and merchant-level data alongside transaction data. Partial data coverage reduces the multi-hop advantage.

Key metric: on the SAP SALT benchmark, relational graph ML achieves 91% accuracy vs 63% for rules-based systems and 75% for XGBoost on flat tables.

Why relational data changes the answer

Fraud is a multi-entity event. The transaction itself is just the trigger. The real signal is in the relationships: this device has never been associated with this cardholder, this merchant category is new for this customer, the IP address is in a different state, and three transactions happened in 19 minutes. No single transaction table contains all of these signals. They are spread across cardholder profiles, device fingerprints, merchant data, and geographic signals.

Relational models read the full transaction-cardholder-merchant-device graph and learn composite attack patterns. On the SAP SALT benchmark, relational graph ML achieves 91% accuracy vs 75% for XGBoost on flat tables and 63% for rules-based systems. The practical impact: 40% fewer false positives (meaning fewer angry customers blocked from legitimate purchases) and 25% more caught fraud (meaning fewer losses slipping through).

Rules-based fraud detection is like airport security checking only whether your bag weighs more than 50 pounds. It catches the obvious cases but misses the traveler with a fake passport, a one-way ticket bought an hour ago, and luggage tagged from a different origin city. Relational ML checks the bag, the passport, the ticket, and the travel history together.

How KumoRFM solves this

Relational intelligence built for banking and financial data

Kumo connects cardholder profiles, transaction histories, merchant data, device fingerprints, and geographic signals into a single relational graph. The model detects that Transaction T-900412 involves a card whose recent velocity spiked 4x, at a merchant category the cardholder has never used, from a device IP in a different state than the cardholder's home region, and the transaction amount matches a known test-then-hit pattern. These multi-hop relational signals catch fraud that single-table models miss while reducing false positives by 40%.

From data to predictions

See the full pipeline in action

Connect your tables, write a PQL query, and get predictions with built-in explainability — all in minutes, not months.

1. Your data

The relational tables Kumo learns from

CARDHOLDERS

| cardholder_id | name | home_state | avg_monthly_spend | card_type |
|---|---|---|---|---|
| CH-4001 | Susan Chen | CA | $3,200 | Platinum |
| CH-4002 | Robert James | TX | $1,800 | Gold |
| CH-4003 | Ana Rivera | NY | $5,100 | Signature |

TRANSACTIONS

| txn_id | cardholder_id | merchant_id | amount | channel | timestamp |
|---|---|---|---|---|---|
| T-900410 | CH-4001 | M-220 | $12.50 | card_present | 2025-09-15 14:22 |
| T-900411 | CH-4001 | M-891 | $47.00 | online | 2025-09-15 14:38 |
| T-900412 | CH-4001 | M-3042 | $2,899 | online | 2025-09-15 14:41 |

MERCHANTS

| merchant_id | name | category | risk_tier | country |
|---|---|---|---|---|
| M-220 | Corner Coffee | Food & Beverage | Low | US |
| M-891 | StreamFlix | Digital Services | Low | US |
| M-3042 | ElectroMart | Electronics | Medium | US |

DEVICE_SIGNALS

| txn_id | device_hash | ip_state | browser | is_vpn |
|---|---|---|---|---|
| T-900410 | D-8812 | CA | Safari | False |
| T-900411 | D-8812 | CA | Safari | False |
| T-900412 | D-1199 | FL | Chrome | True |
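The fraud signal only appears once these tables are joined. A pandas sketch (abbreviated columns) of the cross-table view for cardholder CH-4001:

```python
import pandas as pd

# Abbreviated versions of the example tables above.
cardholders = pd.DataFrame({
    "cardholder_id": ["CH-4001"],
    "home_state": ["CA"],
    "avg_monthly_spend": [3200],
})
transactions = pd.DataFrame({
    "txn_id": ["T-900410", "T-900411", "T-900412"],
    "cardholder_id": ["CH-4001"] * 3,
    "amount": [12.50, 47.00, 2899.00],
})
device_signals = pd.DataFrame({
    "txn_id": ["T-900410", "T-900411", "T-900412"],
    "ip_state": ["CA", "CA", "FL"],
    "is_vpn": [False, False, True],
})

# Join transactions to their device signals and the cardholder profile.
joined = (transactions
          .merge(device_signals, on="txn_id")
          .merge(cardholders, on="cardholder_id"))
joined["state_mismatch"] = joined["ip_state"] != joined["home_state"]
print(joined[["txn_id", "amount", "state_mismatch", "is_vpn"]])
```

Per-table, each row looks ordinary; the state mismatch and VPN flag on T-900412 surface only in the joined view.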
2. Write your PQL query

Describe what to predict in 2–3 lines — Kumo handles the rest

PQL
PREDICT BOOL(TRANSACTIONS.IS_FRAUD = 'True', 0, 0, days)
FOR EACH TRANSACTIONS.TXN_ID
WHERE TRANSACTIONS.AMOUNT > 100
3. Prediction output

Every entity gets a score, updated continuously

| TXN_ID | CARDHOLDER | AMOUNT | FRAUD_SCORE | DECISION |
|---|---|---|---|---|
| T-900410 | Susan Chen | $12.50 | 0.02 | Approve |
| T-900411 | Susan Chen | $47.00 | 0.05 | Approve |
| T-900412 | Susan Chen | $2,899 | 0.94 | Block |
4. Understand why

Every prediction includes feature attributions — no black boxes

Transaction T-900412 ($2,899 at ElectroMart)

Predicted: 94% fraud probability

Top contributing features

| Feature | Value | Attribution |
|---|---|---|
| Device mismatch (new device, different state) | FL vs CA | 31% |
| Velocity spike (3 txns in 19 minutes) | 4x normal | 25% |
| Merchant category never used before | Electronics | 19% |
| VPN detected on transaction device | True | 14% |
| Amount anomaly vs cardholder pattern | 3.6x avg | 11% |

Feature attributions are computed automatically for every prediction. No separate tooling required. Learn more about Kumo explainability

Frequently asked questions

Common questions about transaction fraud detection

What is the best ML model for fraud detection?

Graph-based ML models that connect transactions, cardholders, merchants, and device signals outperform single-table approaches. On the SAP SALT benchmark, relational graph ML scores 91% accuracy vs 75% for XGBoost on flat tables. The key advantage is detecting multi-entity attack patterns (device mismatch + velocity spike + new merchant category) that flat models cannot see.

How do you reduce false positives in fraud detection?

False positives drop when the model has more context. A $2,900 online electronics purchase looks suspicious in isolation. But if the model also sees that the device, IP location, and merchant category are all inconsistent with the cardholder's history, it can confidently flag fraud. If all those signals match normal behavior, it can confidently approve. Relational ML provides this full-context scoring, reducing false positives by 40%.

Can ML detect fraud in real time?

Yes. Modern relational ML models pre-compute graph embeddings and score new transactions in milliseconds. The model does not rebuild the graph per transaction. It updates incrementally as new transactions flow in, maintaining sub-100ms scoring latency even at millions of transactions per day.

What data do you need for a fraud detection model?

Transaction records are the minimum. For best results, add cardholder profiles, merchant metadata, device fingerprints (device hash, IP, browser, VPN detection), and geographic signals. Each additional data source closes a blind spot that fraudsters can exploit.

How much does transaction fraud cost banks?

US card fraud losses exceeded $12B in 2024 (Nilson Report). But the indirect cost is even larger: false declines cost $118B annually (Aite-Novarica) because each blocked legitimate transaction costs the issuer $118 in lost revenue and customer goodwill. Better models save money on both sides of the equation.

Bottom line: Reduce false positives by 40% and catch 25% more fraud, saving $150-250M annually for a top-10 issuer while recovering $118 in revenue per avoided false decline.

Topics covered

transaction fraud detection AI · real-time fraud detection banking · payment fraud machine learning · card fraud prediction · graph neural network fraud · KumoRFM · fraud analytics financial services · relational deep learning fraud · false positive reduction fraud · fraud scoring model

One Platform. One Model. Infinite Predictions.

KumoRFM

Relational Foundation Model

Turn structured relational data into predictions in seconds. KumoRFM delivers zero-shot predictions that rival months of traditional data science. No training, feature engineering, or infrastructure required. Just connect your data and start predicting.

For critical use cases, fine-tune KumoRFM on your data using the Kumo platform and Research Agent for 30%+ higher accuracy than traditional models.
