
Peptides-func: Testing Long-Range Dependencies in Graph Neural Networks

Peptides-func is a dataset of 15,535 peptide graphs where functional classification requires information from distant parts of the molecular chain. It is part of the Long Range Graph Benchmark (LRGB) that exposed a critical limitation of standard GNNs: they cannot see far enough.


TL;DR

  • Peptides-func has 15,535 peptide graphs averaging 151 atoms (nodes) and 307 bonds (edges). Nodes have 10 features encoding atom properties. The task is multi-label function classification.
  • Long-range benchmark: functional sites are far apart in the graph. Standard 2-3 layer GNNs only see local neighborhoods and miss these long-range patterns.
  • Graph transformers (GPS ~65% AP) significantly outperform standard GNNs (GCN ~59% AP) because global attention captures distant dependencies directly.
  • Long-range dependencies are ubiquitous in real data: customer journeys span months, fraud chains cross many hops, and supply chain effects propagate globally.
  • KumoRFM's graph transformer captures long-range dependencies natively, applying the capability Peptides-func benchmarks to enterprise relational prediction.

At a glance: 15,535 graphs · ~151 avg nodes · 10 node features · multi-label task

What Peptides-func contains

Peptides-func is a dataset of 15,535 peptide molecular graphs from the Long Range Graph Benchmark (LRGB). Each peptide is represented as an atomic graph: individual atoms are nodes (10 features encoding atom type, degree, charge, etc.), and chemical bonds are edges. Graphs average 151 atoms and 307 bonds. The multi-label task predicts peptide functional classes (antimicrobial, antiviral, cell-penetrating, etc.).

The defining characteristic is that functional properties depend on the overall peptide structure, not just local motifs. An antimicrobial peptide's activity depends on the arrangement of hydrophobic and charged atoms across its full length. Information must propagate from one end of the chain to the other -- a distance of many hops in the atomic graph.

Why Peptides-func matters

Standard GNN benchmarks (Cora, MUTAG, ZINC) can be solved with 2-3 layers of message passing (2-3 hop neighborhoods). The LRGB authors showed that most GNN improvements on these benchmarks do not transfer to tasks requiring long-range information. Peptides-func exposes this limitation: a 2-layer GCN aggregates information only from atoms within 2 bonds of each node, missing the global structural patterns that determine function.
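To make the receptive-field limit concrete, here is a stdlib-only sketch that measures how many nodes a k-layer message-passing GNN can "see" from a given node. A 151-node path graph stands in for a peptide backbone (an assumption for illustration; real Peptides-func graphs also include side-chain atoms and branching):

```python
from collections import deque

def receptive_field(adj, source, k):
    """Return the set of nodes within k hops of `source` (BFS),
    i.e. the receptive field of a k-layer message-passing GNN."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

# A 151-node path graph as a crude stand-in for a peptide backbone.
n = 151
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

mid = n // 2
print(len(receptive_field(adj, mid, 2)))   # a 2-layer GNN sees only 5 of 151 nodes
print(len(receptive_field(adj, mid, 75)))  # ~75 layers needed to cover the chain
```

Even on this idealized chain, covering the whole graph would require roughly 75 message-passing layers, far deeper than any standard GNN is trained in practice.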

Graph transformers (GPS, SAN) address this by adding global attention: each node can attend to every other node regardless of graph distance. On Peptides-func, GPS achieves ~65% average precision versus GCN's ~59%. The 6-point gap, while modest in absolute terms, represents a qualitative capability difference -- the ability to reason about distant graph relationships.
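The GPS idea can be sketched in a few lines: each layer combines a local message-passing branch with a global attention branch, so information crosses the whole graph in a single layer. The toy below uses scalar node features and omits the MLPs, residuals, and positional encodings of the real architecture; it is a conceptual sketch, not the GPS implementation:

```python
import math

def gps_layer(x, adj):
    """One hybrid update per node: local message passing + global attention."""
    # Local branch: mean over 1-hop neighbours (GCN-style aggregation).
    local = [sum(x[j] for j in adj[i]) / len(adj[i]) for i in range(len(x))]
    # Global branch: softmax attention over all nodes (scalar dot products),
    # letting every node read from every other node regardless of distance.
    glob = []
    for i in range(len(x)):
        scores = [x[i] * x[j] for j in range(len(x))]
        m = max(scores)  # subtract max for numerical stability
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        glob.append(sum(w / z * xj for w, xj in zip(weights, x)))
    # GPS combines the two branches (residuals and MLP omitted here).
    return [l + g for l, g in zip(local, glob)]

# Tiny 4-node path 0-1-2-3 with scalar features.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = [1.0, 0.0, 0.0, -1.0]
print(gps_layer(x, adj))
```

Note that after one layer, node 0's output already depends on node 3 through the attention branch, whereas the local branch alone would need three layers to connect them.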

Loading Peptides-func in PyG

load_peptides_func.py
from torch_geometric.datasets import LRGBDataset

train_dataset = LRGBDataset(root='/tmp/LRGB', name='Peptides-func', split='train')
val_dataset = LRGBDataset(root='/tmp/LRGB', name='Peptides-func', split='val')
test_dataset = LRGBDataset(root='/tmp/LRGB', name='Peptides-func', split='test')

print(f"Train: {len(train_dataset)}")
graph = train_dataset[0]
print(f"Nodes: {graph.num_nodes}, Edges: {graph.num_edges}")
print(f"Labels: {graph.y.shape}")  # Multi-label binary vector

LRGB datasets use average precision (AP) as the primary metric for multi-label tasks.
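For reference, per-label average precision can be computed with a few lines of stdlib Python (production code would typically use `sklearn.metrics.average_precision_score` or torchmetrics instead). AP is the mean of precision@k taken at each positive item, with predictions ranked by score:

```python
def average_precision(scores, labels):
    """AP for one binary label: mean of precision@k over the positive
    items, ranked by predicted score (descending)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, total, n_pos = 0, 0.0, sum(labels)
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            total += hits / rank
    return total / n_pos if n_pos else 0.0

# Perfect ranking: both positives ranked first -> AP = 1.0
print(average_precision([0.9, 0.8, 0.1], [1, 1, 0]))  # 1.0
# Single positive ranked second -> precision@2 = 0.5 -> AP = 0.5
print(average_precision([0.9, 0.8], [0, 1]))  # 0.5
```

For a multi-label task like Peptides-func, the reported number is the unweighted mean of this quantity over the label columns.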

Common tasks and benchmarks

Multi-label graph classification evaluated by average precision (AP). GCN: ~59.3%, GIN: ~59.8%, GAT: ~59.6%, GatedGCN: ~60.8%, GPS: ~65.4%, SAN: ~64.4%. The clear separation between standard GNNs (~59-61%) and graph transformers (~64-65%) validates the long-range hypothesis: global attention is necessary for tasks where information must travel many hops.

Example: long-range effects in business

Long-range dependencies are everywhere in enterprise data. A customer's purchase 6 months ago affects their churn risk today. A supplier disruption in Asia impacts manufacturing in Europe weeks later. A product return triggers a chain of events across customer service, inventory, and finance. These long-range relational effects cannot be captured by models that only see immediate neighbors. They require the global context that graph transformers provide.

Published benchmark results

Multi-label graph classification on Peptides-func. Metric is average precision (AP). Higher is better.

Method     | AP (%) | Year | Paper
GCN        | 59.3   | 2022 | Dwivedi et al.
GIN        | 59.8   | 2022 | Dwivedi et al.
GAT        | 59.6   | 2022 | Dwivedi et al.
GatedGCN   | 60.8   | 2022 | Dwivedi et al.
SAN        | 64.4   | 2022 | Dwivedi et al.
GPS        | 65.4   | 2022 | Rampasek et al.
Exphormer  | ~65.0  | 2023 | Shirzad et al.

Original Paper

Long Range Graph Benchmark

V. P. Dwivedi, L. Rampasek, M. Galkin, A. Parviz, G. Wolf, A. T. Luu, D. Beaini (2022). NeurIPS Datasets and Benchmarks Track

Read paper →

Original data source

The Peptides-func dataset is part of the Long Range Graph Benchmark (LRGB). The peptide structures come from the SATPdb database of therapeutic peptides. The benchmark is available from the LRGB GitHub repository.

cite_peptides_func.bib
@inproceedings{dwivedi2022long,
  title={Long Range Graph Benchmark},
  author={Dwivedi, Vijay Prakash and Rampasek, Ladislav and Galkin, Mikhail and Parviz, Ali and Wolf, Guy and Luu, Anh Tuan and Beaini, Dominique},
  booktitle={NeurIPS Datasets and Benchmarks Track},
  year={2022}
}

BibTeX citation for the LRGB benchmark (Peptides-func dataset).

Which dataset should I use?

Peptides-func vs Peptides-struct: Both are from LRGB. Peptides-func is multi-label classification (function). Peptides-struct is regression (structural properties). Func tests long-range classification; struct tests long-range regression.

Peptides-func vs ZINC: ZINC tests expressiveness on small molecules (~23 atoms) with local features. Peptides-func tests long-range capture on larger molecules (~151 atoms). If your model works on ZINC but fails on Peptides-func, it lacks long-range capability.

Peptides-func vs QM9: QM9 has small molecules where 3D geometry dominates. Peptides-func has large peptides where long-range graph topology dominates. Different bottlenecks, both molecular.

From benchmark to production

Production long-range graph reasoning operates on much larger graphs (millions of nodes) where global attention is computationally expensive. Efficient approximations (sparse attention, local-global hybrid attention, memory-efficient transformers) are required. The long-range capability must be maintained while scaling to production graph sizes.
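One common way to tame the quadratic cost of global attention is a sparse attention pattern: each node attends to its k-hop neighborhood plus a handful of "hub" nodes that every node can route through (in the spirit of expander- or virtual-node-based transformers such as Exphormer). A stdlib-only sketch of building such a mask, with the hub choice purely illustrative:

```python
from collections import deque

def khop_neighbors(adj, src, k):
    """Nodes within k hops of `src` (BFS)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == k:
            continue
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return seen

def sparse_attention_mask(adj, k=2, hubs=()):
    """mask[i][j] = True if i may attend to j: the k-hop locals of i,
    plus hub nodes that attend to (and are attended by) everyone."""
    n = len(adj)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in khop_neighbors(adj, i, k):
            mask[i][j] = True
        for h in hubs:
            mask[i][h] = mask[h][i] = True
    return mask

# 10-node path; one hub gives any pair a route in <= 2 attention steps.
n = 10
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
mask = sparse_attention_mask(adj, k=2, hubs=[0])
print(sum(map(sum, mask)))  # far fewer entries than dense attention's n*n
```

The mask stays near-linear in graph size while still connecting any two nodes through a hub in two attention steps, which is the trade-off production systems exploit.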

Frequently asked questions

What is the Peptides-func dataset?

Peptides-func is a dataset of 15,535 peptide graphs from the Long Range Graph Benchmark (LRGB). Each peptide graph averages 151 atom nodes and 307 edges, with 10 features per node. The task is multi-label graph classification: predict which of 10 functional classes the peptide belongs to.

What makes Peptides-func a long-range benchmark?

Peptides are long molecular chains where functionally important sites can be far apart in the graph (many hops). Standard GNNs with 2-3 layers only see local neighborhoods. Peptides-func tests whether architectures can capture long-range dependencies across the full peptide chain.

How do I load Peptides-func in PyTorch Geometric?

Use `from torch_geometric.datasets import LRGBDataset; dataset = LRGBDataset(root='/tmp/LRGB', name='Peptides-func')`. The LRGB package provides standard train/val/test splits.

What models work best on Peptides-func?

Graph transformers (GPS, SAN) significantly outperform standard GNNs (GCN, GIN) because global attention captures long-range dependencies directly. GCN: ~59% AP, GPS: ~65% AP. The gap demonstrates why transformers matter for long-range tasks.

Why is the Long Range Graph Benchmark important?

LRGB (including Peptides-func) was created because standard benchmarks (Cora, MUTAG, ZINC) can be solved with 2-3 GNN layers. Real-world tasks often require information from distant parts of the graph. LRGB tests this capability that standard benchmarks miss.

Learn more about graph ML

PyTorch Geometric is the open-source foundation for graph neural networks. Explore more layers, concepts, and production patterns.