PyG / Guide · 8 min read

Heterophily: When Connected Nodes Have Different Labels

Heterophily breaks the assumption baked into most GNN layers. When neighbors have different labels, aggregation acts as noise, not signal. Handling heterophily requires architectures that can learn to differentiate rather than smooth.

PyTorch Geometric

TL;DR

  • Heterophily means connected nodes tend to have different labels (edge homophily ratio < 0.5). Standard GNNs fail because averaging different-label neighbors destroys class-specific signal.
  • On strongly heterophilous graphs, a simple MLP ignoring graph structure can outperform GCNConv. This is a diagnostic signal: if MLP > GNN, suspect heterophily.
  • Three solutions: (1) Separate ego/neighbor representations before combining, (2) Use higher-order neighborhoods where homophily may exist, (3) Use signed message passing that can subtract neighbor influence.
  • Enterprise data often has heterophilous substructures: customer-product bipartite edges, cross-type supply chain links, fraud-victim interactions.
  • Graph transformers handle both homophily and heterophily because attention naturally learns whether to align with or differentiate from each neighbor.

Heterophily is the graph property where connected nodes tend to have different labels or dissimilar features, and it is the regime where standard GNN architectures perform worst. Most GNN layers aggregate neighbor features through averaging (GCNConv) or weighted averaging (GATConv). Under heterophily, neighbors have different labels, so averaging mixes conflicting class signals. The node's own informative features get diluted by neighbor noise. After 2-3 layers, the class-specific signal can be destroyed entirely.
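To make the definition concrete, here is a minimal sketch that computes the edge homophily ratio in plain PyTorch. The toy graph and labels are invented for illustration; PyG also provides torch_geometric.utils.homophily for the same measurement.

```python
import torch

def edge_homophily(edge_index: torch.Tensor, y: torch.Tensor) -> float:
    """Fraction of edges whose endpoints share a label."""
    src, dst = edge_index
    return (y[src] == y[dst]).float().mean().item()

# Toy graph: 4 nodes with labels [0, 0, 1, 1]
y = torch.tensor([0, 0, 1, 1])
# Edges as a [2, num_edges] tensor: one same-label edge, three cross-label edges
edge_index = torch.tensor([[0, 0, 1, 0],
                           [1, 2, 3, 3]])

h = edge_homophily(edge_index, y)
print(h)  # 0.25 -> h < 0.5, so this toy graph is heterophilous
```

A ratio of 0.25 means three out of four edges cross class boundaries, placing this graph firmly in the regime where mean aggregation hurts.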

Why it matters for enterprise data

Enterprise relational databases frequently contain heterophilous relationships:

  • Customer-product graphs: A customer buying from many product categories creates edges between the customer (one behavior profile) and diverse products (different category labels).
  • Supply chain networks: Manufacturers connect to distributors, which connect to retailers. Each entity type has different characteristics and labels.
  • Fraud-victim interactions: Fraudsters target victims with different risk profiles. The edge between fraudster and victim is inherently heterophilous.

Applying a standard GCNConv to these graphs can perform worse than ignoring graph structure entirely. Recognizing heterophily and choosing the right architecture is critical for enterprise GNN deployment.

How heterophily degrades GNN performance

Consider node classification on a graph with edge homophily h = 0.2 (80% of edges cross class boundaries):

  • Layer 0: Each node has its own features, distinct per class. An MLP would classify correctly.
  • Layer 1 (GCNConv): Each node averages its neighbors' features. Since 80% of neighbors are different-class, the averaged features are dominated by the wrong class signal.
  • Layer 2: Averaging again further dilutes the original signal. The node's representation now reflects the majority class of its 2-hop neighborhood, not its own class.
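The dilution in the walkthrough above can be reproduced numerically. In this toy sketch (the 1-D class prototypes and the 1-in-5 same-class neighborhood are made up to match h = 0.2), a single round of mean aggregation flips the sign of a class-0 node's representation:

```python
import torch

# 1-D class prototypes for clarity: class 0 -> +1, class 1 -> -1
x_self = torch.tensor([1.0])             # a class-0 node; an MLP on this is correct
neighbors = torch.tensor([[1.0],         # only 1 of 5 neighbors is same-class (h = 0.2)
                          [-1.0], [-1.0], [-1.0], [-1.0]])

# One round of GCN-style mean aggregation over {self} ∪ neighbors
h1 = torch.cat([x_self.unsqueeze(0), neighbors]).mean(dim=0)
print(h1.item())  # ≈ -0.33: the sign has flipped; the node now looks like class 1
```

The original feature was perfectly separable, yet after one averaging step the representation carries the majority class of the neighborhood instead of the node's own class.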

Solutions for heterophilous graphs

heterophily_architectures.py
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class SeparateAggGNN(torch.nn.Module):
    """Handle heterophily by keeping ego and neighbor features separate."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        # Separate transforms for self and neighbors
        self.self_lin = torch.nn.Linear(in_dim, hidden_dim)
        self.neigh_conv = GCNConv(in_dim, hidden_dim)
        # Learnable combination
        self.combine = torch.nn.Linear(hidden_dim * 2, out_dim)

    def forward(self, x, edge_index):
        # Keep ego representation separate from aggregated neighbors
        x_self = F.relu(self.self_lin(x))
        x_neigh = F.relu(self.neigh_conv(x, edge_index))
        # Concatenate and let the model learn how to combine
        x_combined = torch.cat([x_self, x_neigh], dim=-1)
        return self.combine(x_combined)

# Key insight: by separating self and neighbor features,
# the model can learn to IGNORE neighbor signal when it conflicts
# (heterophily) or REINFORCE it when it agrees (homophily).

Separating ego and neighbor representations is the simplest heterophily-aware technique. The model learns whether neighbors help or hurt for each feature dimension.

Approach 1: Ego-neighbor separation

Process a node's own features and its aggregated neighbor features through separate neural networks, then combine them. This lets the model learn to ignore or even negate the neighbor signal when it conflicts with the ego signal.

Approach 2: Higher-order neighborhoods

Even if 1-hop neighbors are heterophilous, 2-hop neighbors might be homophilous. In a bipartite graph, nodes 2 hops away are the same type (customer → product → customer). Concatenating features from multiple hop distances lets the model find the right aggregation scale.
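A minimal dense-matrix sketch of this idea (the 4-node adjacency is invented, and a real pipeline would use sparse message passing rather than dense matmuls): aggregate at 1-hop and 2-hop scales and concatenate, so a downstream layer can weight each scale separately.

```python
import torch

# Toy bipartite graph: nodes {0, 3} are one type, {1, 2} the other
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 0., 1.],
                  [1., 0., 0., 1.],
                  [0., 1., 1., 0.]])
x = torch.randn(4, 8)  # node features

# Row-normalize so each hop is a mean over neighbors
A_norm = A / A.sum(dim=1, keepdim=True)

h1 = A_norm @ x   # 1-hop view: heterophilous (other-type) neighbors
h2 = A_norm @ h1  # 2-hop view: same-type nodes in a bipartite graph

# Concatenate ego, 1-hop, and 2-hop views; a downstream linear layer
# can then learn which scale carries class signal (the H2GCN/LINKX idea)
x_multi = torch.cat([x, h1, h2], dim=-1)
print(x_multi.shape)  # torch.Size([4, 24])
```

Because the views are concatenated rather than summed, the homophilous 2-hop signal is never averaged away by the heterophilous 1-hop signal.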

Approach 3: Graph transformers

Graph transformers with attention can learn to assign negative effective weights to heterophilous neighbors, effectively subtracting their influence. This happens naturally through the attention mechanism without explicit architectural modifications.
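Softmax attention weights are themselves non-negative; the "negative effective weight" arises through the learned value and output projections. To make the sign explicit, here is an illustrative signed message passing sketch in plain PyTorch (SignedAggLayer is a hypothetical name for this article, not a PyG class): per-edge weights pass through a tanh and can go negative, letting the layer subtract a conflicting neighbor's contribution.

```python
import torch
import torch.nn as nn

class SignedAggLayer(nn.Module):
    """Illustrative signed message passing: per-edge weights in (-1, 1)
    let the layer subtract a neighbor's influence under heterophily."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)  # scores an (ego, neighbor) pair
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, edge_index):
        src, dst = edge_index
        # tanh keeps weights in (-1, 1): negative = "differentiate from neighbor"
        w = torch.tanh(self.score(torch.cat([x[dst], x[src]], dim=-1)))
        msg = w * self.lin(x[src])
        out = torch.zeros_like(x)
        out.index_add_(0, dst, msg)  # sum signed messages per target node
        return x + out               # residual keeps the ego signal intact

layer = SignedAggLayer(dim=8)
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
print(layer(x, edge_index).shape)  # torch.Size([4, 8])
```

A graph transformer achieves the same effect implicitly; the sketch simply isolates the mechanism: a pairwise score that is allowed to change sign.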

Limitations and what comes next

  1. Mixed homophily is the norm: Real enterprise graphs have both homophilous and heterophilous edges. Global metrics like edge homophily ratio oversimplify. Per-edge or per-node homophily is more informative but harder to optimize for.
  2. Heterophily benchmarks are limited: Academic benchmarks (Texas, Wisconsin, Cornell) are small and have high label noise. Results on these datasets do not always transfer to enterprise-scale heterophilous graphs.
  3. Detection requires labels: You cannot measure homophily without labels. For unsupervised tasks, you must infer heterophily from feature similarity or use architecture-agnostic approaches.

KumoRFM's Relational Graph Transformer handles both homophily and heterophily through global attention that adapts to local graph properties. It achieves strong performance on RelBench tasks regardless of the underlying homophily pattern.

Frequently asked questions

What is heterophily in graphs?

Heterophily is the property where connected nodes tend to have different labels or dissimilar features. It is the opposite of homophily. In a heterophilous graph, averaging a node's neighbors mixes signals from different classes, which degrades standard GNN performance. Examples include predator-prey networks (different species connect), dating networks (opposite genders connect), and some fraud patterns where legitimate and fraudulent entities interact.

Why do standard GNNs fail on heterophilous graphs?

Standard GNN layers (GCNConv, GATConv) aggregate neighbor features, which under homophily acts as denoising (averaging similar signals). Under heterophily, aggregation acts as noise injection: the node's own signal gets diluted by conflicting signals from different-label neighbors. After multiple layers, the original class-specific information is destroyed. On strongly heterophilous graphs, an MLP ignoring graph structure can outperform GCNConv.

How do you handle heterophily in GNNs?

Three main approaches: (1) Separate ego and neighbor representations before combining (H2GCN, LINKX). (2) Learn to mix higher-order neighborhoods where homophily may exist at 2-hop or 3-hop distances. (3) Use signed message passing where the model can both add and subtract neighbor information. Graph transformers handle heterophily naturally because attention can learn to ignore or invert neighbor signals.

How common is heterophily in enterprise data?

More common than benchmark datasets suggest. Customer-product bipartite graphs are often heterophilous (diverse purchases). Supply chain graphs connecting different entity types (manufacturer-distributor-retailer) are structurally heterophilous. Even fraud graphs can have heterophilous substructures where fraudsters specifically target different-profile victims.

How do you detect heterophily in your graph?

Compute the edge homophily ratio: the fraction of edges connecting same-label nodes. If h < 0.5, the graph is heterophilous. Also compare GCNConv performance against a simple MLP (no graph structure); if the MLP wins, heterophily is a likely cause. PyG provides torch_geometric.utils.homophily() for this measurement.

Learn more about graph ML

PyTorch Geometric is the open-source foundation for graph neural networks. Explore more layers, concepts, and production patterns.