Lecture 3.2
Message Passing Framework
The standard computational framework for GNNs is Message Passing. We introduce this generic framework and discuss how to make it equivariant, distinguishing between "Invariant" and "Equivariant" approaches.
1. The Message Passing Framework
Originating from Gilmer et al. (2017), the framework consists of two main phases:
1. Message Computation: Calculate a message for each edge.
$$ m_{ij} = \phi_m(h_i, h_j, e_{ij}) $$
2. Aggregation & Update: Sum the messages over the neighborhood and update the node state.
$$ h_i' = \phi_u(h_i, \sum_{j \in \mathcal{N}(i)} m_{ij}) $$
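To make the two phases concrete, here is a minimal NumPy sketch of one message-passing layer. The helper `mlp` and the parameter tuples `params_m`/`params_u` are hypothetical stand-ins for the learned functions $\phi_m$ and $\phi_u$; any differentiable network would do in their place.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Two-layer perceptron with ReLU; a stand-in for phi_m / phi_u.
    return np.maximum(x @ W1 + b1, 0) @ W2 + b2

def message_passing_step(h, edge_index, edge_attr, params_m, params_u):
    # h: (N, d) node features; edge_index: (E, 2) rows (i, j);
    # edge_attr: (E, d_e) edge features e_ij.
    dst, src = edge_index[:, 0], edge_index[:, 1]   # node i receives from j
    # Phase 1: message computation, m_ij = phi_m(h_i, h_j, e_ij)
    m = mlp(np.concatenate([h[dst], h[src], edge_attr], axis=-1), *params_m)
    # Phase 2: aggregation, summing m_ij over j in N(i) ...
    agg = np.zeros((h.shape[0], m.shape[-1]))
    np.add.at(agg, dst, m)
    # ... and update, h_i' = phi_u(h_i, aggregated messages)
    return mlp(np.concatenate([h, agg], axis=-1), *params_u)
```

Note that the sum aggregation makes the layer permutation-invariant over each neighborhood, which is the core structural property of the framework.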
2. Geometric Message Passing
When nodes have positions $x_i \in \mathbb{R}^3$, we can make the messages geometric. There are two main strategies (contrasted in the sketch after this list):
- Invariant MPNNs (e.g., SchNet): Condition messages only on distances $\|x_j - x_i\|$.
- Pros: Easy to implement using standard MLPs.
- Cons: Ignores directional information: two neighborhoods with the same distances but different angular arrangements look identical to the network.
- Equivariant Steerable MPNNs: Condition messages on relative position vectors $x_j - x_i$.
- Pros: Uses full geometric information.
- Cons: Requires steerable features and equivariant operations (e.g., Clebsch-Gordan products).
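A hedged sketch contrasting the two strategies: the invariant branch mirrors SchNet-style distance conditioning, while the equivariant branch uses a simple EGNN-style vector message as a stand-in; genuinely steerable MPNNs would instead use spherical-harmonic features combined via Clebsch-Gordan products, which is beyond a few lines. The callables `phi_m` and `phi_x` are hypothetical learned networks.

```python
import numpy as np

def invariant_message(h_i, h_j, x_i, x_j, phi_m):
    # SchNet-style: the message only sees the rotation-invariant distance.
    d = np.linalg.norm(x_j - x_i)
    return phi_m(np.concatenate([h_i, h_j, [d]]))

def equivariant_message(h_i, h_j, x_i, x_j, phi_m, phi_x):
    # EGNN-style stand-in: a scalar (invariant) message plus a vector
    # message carried by the relative position, so it rotates with the input.
    rel = x_j - x_i
    m_scalar = phi_m(np.concatenate([h_i, h_j, [rel @ rel]]))
    m_vector = phi_x(m_scalar) * rel      # phi_x outputs a scalar gate
    return m_scalar, m_vector
```

Rotating both positions rotates `m_vector` by the same rotation while leaving `m_scalar` unchanged, which is exactly the equivariance property we want.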
3. Convolution vs. Message Passing
We can view these methods as a spectrum:
- Linear Convolution: The message is a linear function of $h_j$, weighted by a geometric kernel $K(x_j - x_i)$ (see the sketch below):
$$ m_{ij} = K(x_j - x_i) h_j $$
- Non-Linear Convolution (MPNN): The message is a non-linear function of both $h_i$ and $h_j$. This allows for more expressive interactions (such as attention or gating) while maintaining the same geometric principles.
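A minimal sketch of the linear-convolution message, assuming the kernel $K$ is parameterized by a hypothetical network `kernel_net` that maps a relative position to a $(d_{\text{out}}, d_{\text{in}})$ matrix, as in continuous point-cloud convolutions:

```python
import numpy as np

def conv_message(h_j, x_i, x_j, kernel_net, d_out):
    # Geometric kernel evaluated at the relative position x_j - x_i.
    K = kernel_net(x_j - x_i).reshape(d_out, h_j.shape[0])
    # Linear in h_j, with no dependence on h_i, unlike the general phi_m.
    return K @ h_j
```

Replacing `K @ h_j` with a learned non-linear function of both endpoints recovers the general MPNN message from Section 1, which is the sense in which these methods form a spectrum.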