30 layer guides covering PyTorch Geometric’s most important GNN layers, from foundational convolutions to state-of-the-art graph transformers.
GCNConv: The default GNN layer. Symmetric normalization, simple averaging. (Kipf & Welling, 2016)
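The symmetric normalization behind GCNConv can be sketched in a few lines of numpy. This is a toy illustration of the propagation rule D^{-1/2}(A + I)D^{-1/2}X, not PyG's implementation; the adjacency matrix and features below are made-up examples.

```python
import numpy as np

# Toy undirected graph: 3 nodes, edges 0-1 and 1-2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)  # one-hot node features

# GCN propagation: D^{-1/2} (A + I) D^{-1/2} X
A_hat = A + np.eye(3)            # add self-loops
deg = A_hat.sum(axis=1)          # degrees including self-loops
D_inv_sqrt = np.diag(deg ** -0.5)
H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X
```

Each node's output is a degree-weighted average of itself and its neighbors, which is why GCN layers behave like smoothing operators.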
SAGEConv: Inductive learning with neighbor sampling. Scales to millions of nodes. (Hamilton, Ying & Leskovec, 2017)
GATConv: Attention-weighted neighbor aggregation for variable-importance edges. (Veličković et al., 2017)
GATv2Conv: Fixes the static-attention expressiveness gap of GATConv with dynamic attention. (Brody et al., 2021)
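The attention-weighted aggregation both GAT variants perform can be sketched for a single node in numpy. A hypothetical single-head, GAT-style score is used here (LeakyReLU of a learned vector applied to concatenated features, then a softmax over neighbors); the features and weights are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

h = rng.normal(size=(3, 4))   # 3 nodes, 4-dim input features
W = rng.normal(size=(4, 2))   # shared linear transform
a = rng.normal(size=(4,))     # attention vector (acts on concatenated pair)

z = h @ W                     # transformed features, 2-dim
neighbors = [1, 2]            # node 0 attends over its neighbors

logits = np.array([leaky_relu(a @ np.concatenate([z[0], z[j]]))
                   for j in neighbors])
alpha = np.exp(logits) / np.exp(logits).sum()   # softmax over neighbors
out = sum(alpha[k] * z[j] for k, j in enumerate(neighbors))
```

GATv2's change is where the nonlinearity sits (applied before the attention vector), which makes the ranking of neighbors depend on the query node.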
TransformerConv: Multi-head attention on graphs with edge features. (Shi et al., 2020)
GINConv: Maximally expressive for graph isomorphism testing. (Xu et al., 2018)
GINEConv: GIN with edge features for molecular property prediction. (Hu et al., 2019)
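GIN's expressiveness comes from an injective sum aggregation followed by an MLP. A minimal numpy sketch of the update for one node, with a toy one-layer "MLP" and made-up features:

```python
import numpy as np

# GIN-style update: MLP((1 + eps) * h_i + sum over neighbor features).
rng = np.random.default_rng(1)
h = rng.normal(size=(4, 3))      # 4 nodes, 3-dim features
neighbors_of_0 = [1, 2, 3]
eps = 0.0                        # learnable in the real layer

agg = (1 + eps) * h[0] + h[neighbors_of_0].sum(axis=0)
W = rng.normal(size=(3, 3))
out = np.maximum(W @ agg, 0.0)   # toy single ReLU layer standing in for the MLP
```

Summation (rather than mean or max) is what lets GIN distinguish multisets of neighbor features, matching the Weisfeiler-Lehman test.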
PNAConv: Multiple aggregators (mean, max, min, std) for richer representations. (Corso et al., 2020)
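The multi-aggregator idea is simple to sketch: instead of committing to one aggregator, concatenate several. A toy numpy version over made-up neighbor features (PNA additionally applies degree scalers, omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
neigh = rng.normal(size=(5, 3))   # 5 neighbors, 3-dim features

# Concatenate mean / max / min / std over the neighborhood.
aggregated = np.concatenate([
    neigh.mean(axis=0),
    neigh.max(axis=0),
    neigh.min(axis=0),
    neigh.std(axis=0),
])                                # shape (12,): 4 aggregators x 3 dims
```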
RGCNConv: Per-relation weight matrices for knowledge graphs. (Schlichtkrull et al., 2017)
HGTConv: Heterogeneous Graph Transformer with type-specific attention. (Hu et al., 2020)
HANConv: Heterogeneous attention with semantic-level aggregation. (Wang et al., 2019)
HeteroConv: Flexible wrapper that applies any layer to heterogeneous graphs. (PyG Team, 2021)
GPSConv: A general, powerful, scalable graph transformer recipe. (Rampášek et al., 2022)
APPNP: Personalized PageRank propagation. Deep propagation without over-smoothing. (Gasteiger et al., 2018)
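APPNP decouples prediction from propagation: a model produces predictions H once, then personalized PageRank spreads them over the graph with a teleport back to H. A toy numpy sketch with a made-up graph and identity "predictions":

```python
import numpy as np

# Iterate Z <- (1 - alpha) * A_norm @ Z + alpha * H.
# The teleport term alpha * H anchors each node to its own prediction,
# which is what prevents over-smoothing at large depth.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
A_hat = A + np.eye(3)
d = A_hat.sum(axis=1)
A_norm = np.diag(d ** -0.5) @ A_hat @ np.diag(d ** -0.5)

H = np.eye(3)          # stand-in for the model's predictions
alpha, K = 0.1, 10
Z = H.copy()
for _ in range(K):
    Z = (1 - alpha) * A_norm @ Z + alpha * H
```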
GCN2Conv: Skip connections for deep GCNs. 64+ layers without degradation. (Chen et al., 2020)
ChebConv: Spectral filtering with Chebyshev polynomials. (Defferrard et al., 2016)
SGConv: Simplified GCN. One matrix multiply, comparable accuracy. (Wu et al., 2019)
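The SGC simplification collapses K propagation steps into one precomputed matrix power, leaving a single linear layer as the only learned part. A toy numpy sketch (graph, features, and weights are made-up):

```python
import numpy as np

# S = D^{-1/2} (A + I) D^{-1/2}; propagate K hops with no nonlinearities
# in between, so S^K X can be computed once, before training.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)
A_hat = A + np.eye(3)
d = A_hat.sum(axis=1)
S = np.diag(d ** -0.5) @ A_hat @ np.diag(d ** -0.5)

K = 2
X_prop = np.linalg.matrix_power(S, K) @ X   # precomputed once
W = np.ones((3, 2))                          # the only learned weights
logits = X_prop @ W
```

After precomputation, training reduces to logistic regression on X_prop, which is where the speedup comes from.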
ClusterGCNConv: Cluster-based training for large graphs. (Chiang et al., 2019)
LGConv: LightGCN for recommender systems. No feature transformation or nonlinearity. (He et al., 2020)
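LightGCN strips convolution down to bare propagation, then averages the embeddings from every layer. A minimal numpy sketch on a made-up two-node graph (LightGCN normalizes without self-loops):

```python
import numpy as np

A = np.array([[0., 1.],
              [1., 0.]])
d = A.sum(axis=1)
S = np.diag(d ** -0.5) @ A @ np.diag(d ** -0.5)   # no self-loops

E0 = np.eye(2)                 # initial (learnable) embeddings
layers = [E0]
for _ in range(3):             # propagate: no weights, no nonlinearity
    layers.append(S @ layers[-1])
E = np.mean(layers, axis=0)    # final embedding: mean over all layers
```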
GENConv: DeeperGCN with generalized aggregation for deeper models. (Li et al., 2020)
EdgeConv: Dynamic graph CNN for point cloud learning. (Wang et al., 2018)
NNConv: Edge-conditioned convolution for molecular graphs. (Gilmer et al., 2017)
SplineConv: Continuous B-spline kernels for geometric data. (Fey et al., 2017)
PointNetConv: PointNet layer for 3D point set learning. (Qi et al., 2016)
RGATConv: Relational graph attention for multi-relation graphs. (Busbridge et al., 2019)
FiLMConv: Feature-wise linear modulation for conditional GNNs. (Brockschmidt, 2019)
SignedConv: Signed graph convolution for positive/negative edges. (Derr et al., 2018)
HypergraphConv: Convolution on hypergraphs, whose edges join multiple nodes. (Bai et al., 2019)
AGNNConv: Attention-based GNN with soft attention coefficients. (Thekumparampil et al., 2018)
DirGNNConv: Direction-aware convolution for heterophilic graphs. (Rossi et al., 2023)
KumoRFM’s Relational Graph Transformer combines the best of all these layers; you write one line of PQL.