
LGConv: The Simplest GNN That Beats Matrix Factorization

LightGCN strips GCN to its bare minimum for recommendations: no weight matrices, no activation functions, just neighborhood aggregation on the user-item interaction graph. It is simpler than GCN and outperforms it on collaborative filtering benchmarks.

PyTorch Geometric

TL;DR

  • LGConv removes weight matrices and nonlinear activations from GCN, keeping only neighborhood aggregation. For ID-based collaborative filtering, these components hurt rather than help.
  • Final embeddings are a weighted sum across all layers: E_final = (E^0 + E^1 + ... + E^K) / (K+1). This captures both direct and multi-hop user-item relationships.
  • Consistently outperforms matrix factorization and full GCN on recommendation benchmarks (Gowalla, Yelp2018, Amazon-Book).
  • Designed for pure collaborative filtering with ID embeddings. For side information (features), combine with feature-based models or use full GCN/GAT.

Original Paper

LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation

He et al., SIGIR 2020


What LGConv does

LightGCN performs the simplest possible graph convolution:

  1. Initialize user and item embeddings (learnable ID lookup)
  2. At each layer, aggregate neighbor embeddings with degree normalization (no weight matrix, no activation)
  3. Combine embeddings from all layers via weighted average
  4. Score user-item pairs via dot product of combined embeddings

The math (simplified)

LGConv formula
# Layer-wise propagation (no W, no activation)
E^(k+1)_u = Σ_{i in N(u)} (1 / sqrt(|N(u)| * |N(i)|)) · E^(k)_i
E^(k+1)_i = Σ_{u in N(i)} (1 / sqrt(|N(i)| * |N(u)|)) · E^(k)_u

# Layer combination
E_u = (1/(K+1)) · (E^(0)_u + E^(1)_u + ... + E^(K)_u)
E_i = (1/(K+1)) · (E^(0)_i + E^(1)_i + ... + E^(K)_i)

# Prediction
score(u, i) = E_u^T · E_i

Where:
  E^(0)_u, E^(0)_i = initial learnable embeddings
  K = number of layers (typically 2-3)

Pure aggregation without transformation. The layer combination captures both direct co-interaction (layer 0) and multi-hop similarity (layers 1-K).
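The propagation and layer-combination formulas above can be checked in a few lines of plain PyTorch. This is a toy sketch (the graph size, edges, and dimensions are illustrative, not from the article): build the symmetric normalized adjacency D^{-1/2} A D^{-1/2} of a small user-item graph, propagate K times with no weights or activations, then average the layers.

```python
import torch

# Toy bipartite graph: 3 users (nodes 0-2), 4 items (nodes 3-6).
num_users, num_items, dim, K = 3, 4, 8, 2
n = num_users + num_items

# Interaction edges as (user index, item index offset by num_users).
edges = torch.tensor([[0, 3], [0, 4], [1, 4], [2, 5], [2, 6]])

# Symmetric adjacency matrix A of the user-item graph.
A = torch.zeros(n, n)
A[edges[:, 0], edges[:, 1]] = 1.0
A[edges[:, 1], edges[:, 0]] = 1.0

# Symmetric degree normalization: A_hat = D^{-1/2} A D^{-1/2},
# which gives exactly the 1/sqrt(|N(u)|*|N(i)|) coefficients above.
deg = A.sum(dim=1)
d_inv_sqrt = deg.pow(-0.5)
d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0  # guard isolated nodes
A_hat = d_inv_sqrt.view(-1, 1) * A * d_inv_sqrt.view(1, -1)

# Propagation: E^(k+1) = A_hat @ E^(k). No weight matrix, no activation.
E = torch.randn(n, dim)  # E^(0): the learnable ID embeddings
layers = [E]
for _ in range(K):
    layers.append(A_hat @ layers[-1])

# Layer combination: uniform average over E^(0) .. E^(K).
E_final = torch.stack(layers).mean(dim=0)
print(E_final.shape)  # torch.Size([7, 8])
```

Note that the entire "model" here is one matrix product per layer; everything learnable lives in E^(0).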

PyG implementation

lightgcn_model.py
import torch
from torch_geometric.nn import LGConv

class LightGCN(torch.nn.Module):
    def __init__(self, num_users, num_items, embedding_dim, num_layers=3):
        super().__init__()
        self.num_users = num_users
        self.embedding = torch.nn.Embedding(num_users + num_items, embedding_dim)
        self.convs = torch.nn.ModuleList([LGConv() for _ in range(num_layers)])
        self.num_layers = num_layers
        torch.nn.init.xavier_uniform_(self.embedding.weight)

    def forward(self, edge_index):
        x = self.embedding.weight
        xs = [x]
        for conv in self.convs:
            x = conv(x, edge_index)
            xs.append(x)
        # Layer combination: average all layers
        x = torch.stack(xs, dim=0).mean(dim=0)
        user_emb = x[:self.num_users]
        item_emb = x[self.num_users:]
        return user_emb, item_emb

    def predict(self, user_emb, item_emb, user_ids, item_ids):
        return (user_emb[user_ids] * item_emb[item_ids]).sum(dim=-1)

model = LightGCN(num_users=10000, num_items=50000,
                 embedding_dim=64, num_layers=3)

LGConv takes no parameters itself (no weight matrix). All learnable parameters are in the initial embedding table. The model trains via BPR loss on positive/negative interaction pairs.
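Since training uses BPR loss, here is a minimal sketch of it. The loss itself is standard; how `users`, `pos_items`, and `neg_items` are sampled, and how this plugs into a full training loop, is assumed for illustration and not shown in the article.

```python
import torch
import torch.nn.functional as F

def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Bayesian Personalized Ranking loss: -log sigmoid(pos - neg).

    Pushes each observed (positive) user-item score above a sampled
    unobserved (negative) one, i.e. optimizes pairwise ranking rather
    than absolute scores.
    """
    return -F.logsigmoid(pos_scores - neg_scores).mean()

# Sketch of one training step with the LightGCN model above:
#   user_emb, item_emb = model(edge_index)
#   pos = model.predict(user_emb, item_emb, users, pos_items)
#   neg = model.predict(user_emb, item_emb, users, neg_items)
#   loss = bpr_loss(pos, neg)   # optionally + L2 on model.embedding.weight
#   loss.backward(); optimizer.step()
```

With equal positive and negative scores the loss is log 2, and it decreases as positive scores pull ahead of negative ones.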

When to use LGConv

  • Collaborative filtering. Pure user-item interaction data without side features. LightGCN is a standard, hard-to-beat baseline in this setting.
  • When simplicity matters. Fewer hyperparameters, faster training, easier to debug. A strong default choice for recommendation system prototypes.
  • When you only have interaction data. No user demographics, no item descriptions. Just who clicked/bought/rated what.

When not to use LGConv

  • When you have rich features. User demographics, item descriptions, contextual features. LGConv cannot use them. Use GCNConv or GATConv.
  • Cold-start users/items. New entities with no interactions have no trained embedding, and LightGCN is transductive: it cannot produce embeddings for users or items unseen during training. Use SAGEConv for inductive settings.

Frequently asked questions

What is LGConv in PyTorch Geometric?

LGConv implements the LightGCN layer from He et al. (2020). It is a simplified GCN designed specifically for collaborative filtering (recommendations). It removes feature transformation and nonlinear activation, keeping only neighborhood aggregation. The final embedding is a weighted sum of embeddings from all layers.

Why does LightGCN remove feature transformation?

In collaborative filtering, node features are learnable ID embeddings, not pre-computed features. Feature transformation (weight matrices) and nonlinear activation add no benefit to ID embeddings and can even hurt performance by introducing unnecessary complexity. LightGCN shows that pure neighborhood aggregation is what matters for recommendations.

How does LightGCN combine multi-layer embeddings?

LightGCN uses a weighted sum of embeddings from all layers (including the initial embedding): final = alpha_0*E^0 + alpha_1*E^1 + ... + alpha_K*E^K. Typically, equal weights (1/(K+1)) work well. This layer combination captures both local and broader neighborhood structure.
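The general weighted combination is a one-liner. In this sketch, `xs` stands in for the list of per-layer embeddings collected in the forward pass (as in the `xs` list of the model above); the uniform alphas reproduce the simple mean.

```python
import torch

K = 3
# Stand-in for the per-layer embeddings E^0 .. E^K (5 nodes, dim 8).
xs = [torch.randn(5, 8) for _ in range(K + 1)]

# Per-layer weights alpha_0 .. alpha_K; uniform 1/(K+1) here,
# but any fixed or learned weights fit the same formula.
alphas = torch.full((K + 1,), 1.0 / (K + 1))

# final = sum_k alpha_k * E^k
E_final = (torch.stack(xs, dim=0) * alphas.view(-1, 1, 1)).sum(dim=0)
```

With uniform weights this is exactly `torch.stack(xs).mean(dim=0)`, matching the layer combination in the model code above.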

When should I use LGConv vs matrix factorization?

Use LGConv when your user-item interaction graph has meaningful multi-hop structure (users who liked the same items have similar tastes). Use matrix factorization when the graph is very sparse or when you only need direct user-item signals. LightGCN consistently outperforms matrix factorization on standard recommendation benchmarks.

Can LGConv handle side information (user/item features)?

LGConv is designed for pure collaborative filtering with ID embeddings only. If you have side information (user demographics, item descriptions), consider combining LGConv embeddings with feature-based models, or use GCNConv/GATConv which naturally handle node features.
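One common fusion pattern is to concatenate the collaborative embeddings with feature vectors and score the pair with a small MLP. This is an illustrative sketch, not a PyG API: `HybridScorer` and its dimensions are made-up names, and the collaborative embeddings are assumed to come from a trained LightGCN.

```python
import torch

class HybridScorer(torch.nn.Module):
    """Illustrative hybrid scorer: concatenate collaborative (CF)
    embeddings with side-feature vectors, score via a small MLP."""

    def __init__(self, cf_dim: int, feat_dim: int, hidden: int = 64):
        super().__init__()
        # Input: user CF + user features + item CF + item features.
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * (cf_dim + feat_dim), hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, user_cf, user_feat, item_cf, item_feat):
        x = torch.cat([user_cf, user_feat, item_cf, item_feat], dim=-1)
        return self.mlp(x).squeeze(-1)  # one score per (user, item) pair
```

The CF embeddings carry the interaction signal; the feature branch lets new or sparsely interacted entities still get a sensible score.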

Learn more about graph ML

PyTorch Geometric is the open-source foundation for graph neural networks. Explore more layers, concepts, and production patterns.