
All GNN Layers

30 layer guides covering PyTorch Geometric’s most important GNN layers, from foundational convolutions to state-of-the-art graph transformers.

66 total layers in PyG · 30 guides

GCNConv

Core

The default GNN layer. Symmetric normalization, simple averaging.

Kipf & Welling, 2016

SAGEConv

Core

Inductive learning with sampling. Scales to millions of nodes.

Hamilton, Ying & Leskovec, 2017

GATConv

Attention

Attention-weighted neighbor aggregation for variable-importance edges.

Veličković et al., 2017

GATv2Conv

Attention

Fixes GATConv’s static-attention limitation by reordering its operations, yielding dynamic attention.

Brody et al., 2021

TransformerConv

Attention

Multi-head attention on graphs with edge features.

Shi et al., 2020

GINConv

Expressiveness

Maximally expressive for graph isomorphism testing.

Xu et al., 2018

GINEConv

Expressiveness

GIN with edge features for molecular property prediction.

Hu et al., 2019

PNAConv

Expressiveness

Multiple aggregators (mean, max, min, std) for richer representations.

Corso et al., 2020

RGCNConv

Heterogeneous

Per-relation weight matrices for knowledge graphs.

Schlichtkrull et al., 2017

HGTConv

Heterogeneous

Heterogeneous Graph Transformer with type-specific attention.

Hu et al., 2020

HANConv

Heterogeneous

Heterogeneous attention with semantic-level aggregation.

Wang et al., 2019

HeteroConv

Heterogeneous

Flexible wrapper for any layer on heterogeneous graphs.

PyG Team, 2021

GPSConv

Transformer

General, powerful, scalable graph transformer recipe.

Rampášek et al., 2022

APPNP

Spectral

Personalized PageRank propagation. Deep without over-smoothing.

Gasteiger et al., 2018

GCN2Conv

Spectral

Initial residual connections and identity mapping for deep GCNs. 64+ layers without degradation.

Chen et al., 2020

ChebConv

Spectral

Spectral filtering with Chebyshev polynomials.

Defferrard et al., 2016

SGConv

Spectral

Simplified GCN: one matrix multiply, comparable accuracy.

Wu et al., 2019

ClusterGCNConv

Scalable

Cluster-based training for large graphs.

Chiang et al., 2019

LGConv

Scalable

LightGCN for recommendation systems. No feature transformation.

He et al., 2020

GENConv

Scalable

DeeperGCN with generalized aggregation for deeper models.

Li et al., 2020

EdgeConv

Point Cloud

Dynamic graph CNN for point cloud learning.

Wang et al., 2018

NNConv

Molecular

Edge-conditioned convolution for molecular graphs.

Gilmer et al., 2017

SplineConv

Point Cloud

Continuous B-spline kernels for geometric data.

Fey et al., 2017

PointNetConv

Point Cloud

PointNet layer for 3D point set learning.

Qi et al., 2016

RGATConv

Heterogeneous

Relational graph attention for multi-relation graphs.

Busbridge et al., 2019

FiLMConv

Heterogeneous

Feature-wise linear modulation for conditional GNNs.

Brockschmidt, 2019

SignedConv

Special

Signed graph convolution for positive/negative edges.

Derr et al., 2018

HypergraphConv

Special

Convolution on hypergraphs with multi-node edges.

Bai et al., 2019

AGNNConv

Attention

Attention-based GNN with soft attention coefficients.

Thekumparampil et al., 2018

DirGNNConv

Special

Direction-aware convolution for heterophilic graphs.

Rossi et al., 2023

Skip the layer selection. Get predictions in seconds.

KumoRFM’s Relational Graph Transformer combines the best of all these layers. You write one line of PQL.