Lagrangian Propagation Graph Neural Network (LP-GNN)

SAILab, University of Siena, Italy, 2020

GNNs exploit a set of state variables, one assigned to each graph node, and a diffusion mechanism that propagates states among neighboring nodes, implementing an iterative procedure that computes the fixed point of the (learnable) state transition function. In this paper, we propose a novel approach to state computation and learning in GNNs, based on a constrained optimisation problem solved in the Lagrangian framework. The state convergence procedure is implicitly expressed by the constraint-satisfaction mechanism and does not require a separate iterative phase at each epoch of the learning procedure. The computational structure is instead based on the search for saddle points of the Lagrangian in the adjoint space composed of the weights, the neural outputs (node states), and the Lagrange multipliers. The proposed approach is compared experimentally with other popular models for processing graphs.
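In compact form, the learning problem amounts to finding a saddle point of L(w, x, λ) = loss(g_w(x), y) + Σ_v λ_v · [f_w(x_ne[v], l_v) − x_v]: a minimum with respect to the weights w and the states x, and a maximum with respect to the multipliers λ. The PyTorch sketch below only illustrates this scheme and is not the code in this repository: the toy graph, the mean neighbor aggregation, the linear transition/readout maps, and all hyperparameters are assumptions made for the example (the papers also introduce constraint-abstraction functions applied to the residual; here the plain residual is used).

```python
import torch

# Toy setup (all sizes, names and data here are illustrative assumptions).
num_nodes, state_dim, feat_dim, num_classes = 10, 8, 4, 2
edges = torch.randint(0, num_nodes, (2, 30))        # random (src, dst) edge list
feats = torch.randn(num_nodes, feat_dim)            # node labels/features l_v
targets = torch.randint(0, num_classes, (num_nodes,))

# Learnable maps: state transition f_w and output readout g_w.
f_w = torch.nn.Linear(state_dim + feat_dim, state_dim)
g_w = torch.nn.Linear(state_dim, num_classes)

# Free variables of the Lagrangian: node states x and multipliers lam.
x = torch.zeros(num_nodes, state_dim, requires_grad=True)
lam = torch.zeros(num_nodes, state_dim, requires_grad=True)

descent = torch.optim.SGD(
    list(f_w.parameters()) + list(g_w.parameters()) + [x], lr=0.01
)
ascent_lr = 0.01

for step in range(300):
    # Mean-aggregate the states of the incoming neighbors of each node.
    agg = torch.zeros(num_nodes, state_dim).index_add(0, edges[1], x[edges[0]])
    deg = torch.zeros(num_nodes).index_add(
        0, edges[1], torch.ones(edges.shape[1])
    ).clamp(min=1)
    agg = agg / deg.unsqueeze(1)

    # Constraint residual: f_w(neighbor states, features) - x must vanish
    # at a solution, i.e. x becomes a fixed point of the state transition.
    residual = f_w(torch.cat([agg, feats], dim=1)) - x
    lagrangian = (
        torch.nn.functional.cross_entropy(g_w(x), targets)
        + (lam * residual).sum()
    )

    # Saddle-point search: descend on weights and states, ascend on lam.
    descent.zero_grad()
    if lam.grad is not None:
        lam.grad.zero_()
    lagrangian.backward()
    descent.step()
    with torch.no_grad():
        lam += ascent_lr * lam.grad     # gradient ascent on the multipliers
```

Note that no inner fixed-point iteration is run: the constraint residual is driven to zero jointly with learning, so the states converge to the fixed point of the transition function as a by-product of the saddle-point search.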

Papers

  • A Lagrangian Approach to Information Propagation in Graph Neural Networks
  • Deep Lagrangian Constraint-based Propagation in Graph Neural Networks

The framework

The PyTorch code was written by me, Matteo Tiezzi, in August 2020.

I provide a GitHub repository containing the implementation.