Original Paper
Edge Directionality Improves Learning on Heterophilic Graphs
Rossi et al. (2023). LoG 2023
What DirGNNConv does
DirGNNConv is a meta-layer that adds directionality to any GNN:
- Split edges into incoming (j → i) and outgoing (i → j)
- Run the base layer separately on incoming and outgoing subgraphs
- Combine: h_i' = alpha * h_in + (1 - alpha) * h_out, where alpha in [0, 1] is a hyperparameter
The math (simplified)
# Separate message passing by direction
h_i^in = Conv(x, edge_index_in) # messages from predecessors
h_i^out = Conv(x, edge_index_out) # messages from successors
# Learnable directional balance
h_i' = alpha · h_i^in + (1 - alpha) · h_i^out
Where:
edge_index_in = edges pointing TO node i
edge_index_out = edges pointing FROM node i
Conv = any base GNN layer (GCN, GAT, SAGE, etc.)
alpha = scalar hyperparameter in [0, 1] (default 0.5 in PyG)
When alpha = 0.5: symmetric (equivalent to undirected)
When alpha = 1.0: only incoming edges (who references me?)
When alpha = 0.0: only outgoing edges (who do I reference?)

The best-performing alpha reveals the directional bias: in citation networks, incoming citations (being cited) typically matter more than outgoing ones.
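The math above can be sketched in a few lines of plain PyTorch. This is an illustrative toy, not the PyG API: `mean_agg` is a hypothetical stand-in for the base Conv layer, and the graph and features are made up.

```python
import torch

# Toy directed graph 0 → 1, 0 → 2, 1 → 2; row 0 = sources, row 1 = targets.
edge_index = torch.tensor([[0, 0, 1],
                           [1, 2, 2]])

# In-edges of node i are the original edges (messages flow source → target);
# out-edges are the same edges with the direction flipped.
edge_index_in = edge_index
edge_index_out = edge_index.flip([0])

x = torch.tensor([[1.0], [2.0], [3.0]])

def mean_agg(x, ei):
    # Mean of neighbor features per target node (stand-in for any Conv layer).
    out = torch.zeros_like(x)
    count = torch.zeros(x.size(0), 1)
    out.index_add_(0, ei[1], x[ei[0]])
    count.index_add_(0, ei[1], torch.ones(ei.size(1), 1))
    return out / count.clamp(min=1)

h_in = mean_agg(x, edge_index_in)    # node 2 averages x[0] and x[1] → 1.5
h_out = mean_agg(x, edge_index_out)  # node 0 averages x[1] and x[2] → 2.5
alpha = 0.5
h = alpha * h_in + (1 - alpha) * h_out
```

With alpha = 0.5, every node blends its predecessor and successor means equally; pushing alpha toward 1 makes node 0 (no in-edges) collapse toward zero while node 2 (no out-edges) keeps its signal.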
PyG implementation
import torch
import torch.nn.functional as F
from torch_geometric.nn import DirGNNConv, GCNConv

class DirGNN(torch.nn.Module):
    def __init__(self, in_channels, hidden, out_channels):
        super().__init__()
        # Wrap GCNConv with directional splitting
        self.conv1 = DirGNNConv(GCNConv(in_channels, hidden))
        self.conv2 = DirGNNConv(GCNConv(hidden, out_channels))

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = self.conv2(x, edge_index)
        return x
# Works with any base layer
from torch_geometric.nn import GATConv, SAGEConv
dir_gat = DirGNNConv(GATConv(64, 32, heads=4))
dir_sage = DirGNNConv(SAGEConv(64, 32))
model = DirGNN(dataset.num_features, 64, dataset.num_classes)

DirGNNConv wraps any MessagePassing layer. The base layer handles message computation; DirGNNConv handles directional splitting and combination.
When to use DirGNNConv
- Directed graphs with semantic direction. Citation networks (citing vs being cited), web graphs (linking vs being linked), follower networks (following vs being followed).
- Heterophilic graphs. When connected nodes tend to have different labels, directional information helps distinguish different roles (e.g., hub vs authority in web graphs).
- When symmetrization hurts. If converting your directed graph to undirected degrades performance, DirGNNConv recovers the lost directional signal.
- Adding directionality to existing models. Wrap your existing GCNConv/GATConv model with DirGNNConv to see if directional information helps, without rewriting the architecture.
When not to use DirGNNConv
- Naturally undirected graphs. Molecular bonds, co-purchase edges, and co-authorship edges are inherently symmetric. Direction adds complexity without benefit.
- When symmetrization already works well. On many benchmarks, undirected GCN performs nearly as well as directional models. Check if directionality helps before committing.
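One cheap way to check before committing is to measure how directional the graph actually is. The helper below is a hypothetical heuristic (not part of PyG): the fraction of directed edges whose reverse edge is absent. A value near 0 means the graph is effectively symmetric already, so DirGNNConv has little directional signal to exploit.

```python
import torch

def asymmetry(edge_index):
    # Fraction of directed edges (src, dst) with no reverse edge (dst, src).
    edges = set(map(tuple, edge_index.t().tolist()))
    missing = sum((dst, src) not in edges for src, dst in edges)
    return missing / len(edges)

edge_index = torch.tensor([[0, 1, 2, 2],
                           [1, 2, 0, 1]])  # 1→2 and 2→1 are both present
print(asymmetry(edge_index))  # 0.5
```

For a fully symmetric graph (every edge paired with its reverse) this returns 0.0, matching the "naturally undirected" case above where direction adds complexity without benefit.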