
EdgeConv: Dynamic Graphs for Point Clouds and Beyond

EdgeConv recomputes the graph at every layer based on evolving features, making it the standard layer for point cloud processing. Instead of working on a fixed graph, it discovers the right neighborhood structure as the model learns.


TL;DR

  • EdgeConv builds a k-NN graph at each layer from current features, not fixed input structure. The graph evolves as the model learns, discovering semantic neighbors.
  • Edge features are [h_i, h_j - h_i]: absolute target features plus the relative source-target difference. An MLP processes these to produce messages.
  • Standard layer for point cloud classification and segmentation (ModelNet40, ShapeNet). Captures local geometric patterns that PointNet misses.
  • Dynamic graph construction adds O(N*k*log(N)) cost per layer for the k-NN search. Worth it for point clouds; less common for fixed-structure graphs.

Original Paper

Dynamic Graph CNN for Learning on Point Clouds

Wang et al. (2018). ACM TOG 2019


What EdgeConv does

EdgeConv operates in three steps per layer:

  1. Build k-NN graph: Find k nearest neighbors for each point in the current feature space
  2. Compute edge features: For each edge (i, j), create [h_i, h_j - h_i] (absolute + relative)
  3. Apply MLP + max pool: Process edge features with MLP, then max-pool over neighbors
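The same three steps can be written out by hand in plain PyTorch, which makes the mechanics explicit. This is a minimal sketch, not PyG's implementation; the edge_conv_step helper and the tiny MLP here are illustrative:

```python
import torch

def edge_conv_step(x, k, mlp):
    """One EdgeConv step by hand: k-NN graph, [h_i, h_j - h_i] edge
    features, a shared MLP, then max-pooling over the k neighbors."""
    n, f = x.shape
    dist = torch.cdist(x, x)                              # [n, n] pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]  # [n, k], drop self (col 0)
    h_i = x.unsqueeze(1).expand(n, k, f)                  # target features, repeated
    h_j = x[idx]                                          # neighbor (source) features
    edge_feat = torch.cat([h_i, h_j - h_i], dim=-1)       # [n, k, 2f]
    return mlp(edge_feat).max(dim=1).values               # max-pool over neighbors

torch.manual_seed(0)
mlp = torch.nn.Sequential(torch.nn.Linear(6, 16), torch.nn.ReLU())
pts = torch.randn(100, 3)                                 # toy point cloud
out = edge_conv_step(pts, k=8, mlp=mlp)
print(out.shape)  # torch.Size([100, 16])
```

PyG's EdgeConv fuses these steps efficiently over an explicit edge_index; the brute-force cdist here is only for readability.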

PyG implementation

edgeconv_model.py
import torch
import torch.nn.functional as F
from torch_geometric.nn import EdgeConv, global_max_pool
from torch_geometric.nn import knn_graph

class DGCNN(torch.nn.Module):
    def __init__(self, out_channels, k=20):
        super().__init__()
        self.k = k
        self.conv1 = EdgeConv(
            torch.nn.Sequential(
                torch.nn.Linear(2 * 3, 64),  # 2x input dim (concat)
                torch.nn.ReLU(),
                torch.nn.Linear(64, 64),
            ), aggr='max'
        )
        self.conv2 = EdgeConv(
            torch.nn.Sequential(
                torch.nn.Linear(2 * 64, 128),
                torch.nn.ReLU(),
                torch.nn.Linear(128, 128),
            ), aggr='max'
        )
        self.classifier = torch.nn.Linear(128, out_channels)

    def forward(self, x, batch):
        # Dynamic graph: recompute k-NN each layer
        edge_index = knn_graph(x, self.k, batch=batch)
        x = self.conv1(x, edge_index)

        edge_index = knn_graph(x, self.k, batch=batch)
        x = self.conv2(x, edge_index)

        x = global_max_pool(x, batch)
        return self.classifier(x)

# Point cloud: N points x 3 coordinates
model = DGCNN(out_channels=40, k=20)  # ModelNet40

knn_graph is called before each EdgeConv to rebuild the graph from current features. At layer 1, neighbors are spatially close. At deeper layers, they are semantically similar.
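You can observe this rebuild directly by comparing neighbor sets before and after a feature transform. A minimal sketch, using an untrained random linear layer as a stand-in for conv1's learned features (knn_idx is an illustrative helper, not a PyG function):

```python
import torch

def knn_idx(x, k):
    """Each row's k nearest neighbors (self excluded), brute force."""
    return torch.cdist(x, x).topk(k + 1, largest=False).indices[:, 1:]

torch.manual_seed(0)
pts = torch.randn(50, 3)
spatial = knn_idx(pts, k=5).sort(dim=1).values    # the graph conv1 would see
h = torch.nn.Linear(3, 64)(pts)                   # stand-in for conv1's output
semantic = knn_idx(h, k=5).sort(dim=1).values     # the graph conv2 would see
moved = (spatial != semantic).any(dim=1).float().mean()
print(f"{moved:.0%} of points changed at least one neighbor")
```

With a random map the change is arbitrary; after training, the rebuilt graph connects points the network considers similar, not merely close.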

When to use EdgeConv

  • Point cloud classification. ModelNet40, ScanNet. EdgeConv is the standard baseline for 3D shape recognition.
  • Point cloud segmentation. ShapeNet part segmentation. Local geometric patterns captured by dynamic k-NN are essential for part boundaries.
  • Data without fixed graph structure. Any unstructured point set (LiDAR scans, particle physics events) where you need to discover the graph from data.

When not to use EdgeConv

  • Fixed-structure graphs. If your graph structure is given and meaningful (social networks, knowledge graphs), recomputing it each layer discards useful information. Use standard GNN layers.
  • Large graphs. k-NN computation is expensive for large point counts. For 100K+ points, use voxelization or radius-based neighbors.
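PyG exposes radius-based construction via radius_graph (from torch_cluster). As a dependency-free illustration of the idea, a brute-force version looks like this (radius_edges is a hypothetical helper, not the library API):

```python
import torch

def radius_edges(pos, r):
    """All directed edges (i, j) with ||pos_i - pos_j|| < r, brute force.
    Illustrative stand-in for torch_geometric.nn.radius_graph."""
    d = torch.cdist(pos, pos)                                # pairwise distances
    mask = (d < r) & ~torch.eye(pos.size(0), dtype=torch.bool)  # drop self-loops
    return mask.nonzero().t().contiguous()                   # shape [2, num_edges]

torch.manual_seed(0)
pts = torch.rand(200, 3)                # points in the unit cube
edge_index = radius_edges(pts, r=0.2)
```

A radius query keeps neighborhoods at a fixed physical scale, which behaves better than a fixed k on unevenly sampled scans.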

Frequently asked questions

What is EdgeConv in PyTorch Geometric?

EdgeConv implements the Dynamic Graph CNN layer from Wang et al. (2018). It constructs a k-nearest-neighbor graph in feature space at each layer, then applies a shared MLP to edge features (each target node's features concatenated with the neighbor-minus-target difference, [h_i, h_j - h_i]). The graph is recomputed dynamically, adapting to the evolving feature space.

What makes EdgeConv 'dynamic'?

Unlike standard GNN layers that use a fixed graph, EdgeConv recomputes the k-NN graph at each layer based on current node features. The neighborhood structure therefore evolves as the model learns, allowing it to discover semantic neighbors (nearby in feature space), not just spatial ones.

How does EdgeConv handle point clouds?

Point clouds have no inherent graph structure. EdgeConv creates one by computing k-nearest neighbors (typically k=20-40) in feature space. At layer 1, this is spatial proximity. At deeper layers, the graph reflects learned semantic similarity. This dynamic approach captures both local geometry and global semantics.

What is the edge feature in EdgeConv?

EdgeConv computes edge features as the concatenation of the target node's features and the difference between source and target: [h_i, h_j - h_i]. The difference captures relative position/features, while h_i provides absolute context. An MLP then processes these edge features.
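A concrete toy example with 2-dimensional features makes the concatenation explicit:

```python
import torch

h_i = torch.tensor([1.0, 2.0])           # target node's features
h_j = torch.tensor([4.0, 6.0])           # neighbor (source) features
edge_feat = torch.cat([h_i, h_j - h_i])  # [h_i, h_j - h_i]
print(edge_feat)  # tensor([1., 2., 3., 4.])
```

The first half preserves where the target sits in feature space; the second half encodes how the neighbor differs from it.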

When should I use EdgeConv vs PointNetConv?

Use EdgeConv when local structure matters for your 3D task (part segmentation, shape classification). Use PointNetConv when you want a simpler baseline or when the task depends more on global point cloud properties. EdgeConv captures local geometric patterns that PointNet misses.

Learn more about graph ML

PyTorch Geometric is the open-source foundation for graph neural networks. Explore more layers, concepts, and production patterns.