Lagrangian Propagation Graph Neural Networks

Published at the AAAI-20 DLGMA Workshop, 2020

Recommended citation: Matteo Tiezzi, Giuseppe Marra, Stefano Melacci, Marco Maggini and Marco Gori (2020). "Lagrangian Propagation Graph Neural Networks." AAAI-20 DLGMA Workshop. https://deep-learning-graphs.bitbucket.io/dlg-aaai20/

In recent years, the popularity of deep learning techniques has renewed the interest in neural models able to process complex patterns that are naturally encoded as graphs. In particular, different architectures have been proposed to extend the original Graph Neural Network (GNN) model. GNNs exploit a set of state variables, each assigned to a graph node, and a diffusion mechanism among neighboring nodes, to implement an iterative state update procedure that computes the fixed point of the (learnable) state transition function. In this paper, we propose a novel approach to state computation and learning for GNNs, based on a constrained optimisation task solved in the Lagrangian framework. The state convergence procedure is implicitly expressed by the constraint satisfaction mechanism and does not require a separate iterative phase at each epoch of the learning procedure. Indeed, the computational structure is based on the search for saddle points of the Lagrangian in the adjoint space of weights, neural outputs (node states), and Lagrange multipliers. The proposed approach is compared experimentally with other popular models for processing graphs.
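As a companion to the abstract, the following is a minimal sketch of the constrained formulation it describes; the symbols below ($\mathbf{x}_v$ for node states, $f_\theta$ for the transition function, $g$ for the output function, $\mathcal{G}$ for a constraint-enforcing function, $\lambda_v$ for the multipliers) are illustrative and may differ from the exact notation of the paper. Each node state is constrained to be a fixed point of the transition function,

$$\mathbf{x}_v = f_\theta\big(\mathbf{x}_{\mathrm{ne}[v]}, \mathbf{l}_v, \mathbf{l}_{\mathrm{ne}[v]}\big), \quad \forall v \in V,$$

and learning searches for saddle points of the Lagrangian

$$\mathcal{L}(\theta, X, \Lambda) = \sum_{v \in S} \ell\big(g(\mathbf{x}_v), y_v\big) + \sum_{v \in V} \lambda_v\, \mathcal{G}\big(\mathbf{x}_v - f_\theta(\cdot)\big),$$

via gradient descent in the weights $\theta$ and states $X$ and gradient ascent in the multipliers $\Lambda$, so that constraint satisfaction (state convergence) and weight optimisation happen jointly rather than in nested loops.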

Download paper here

BibTeX:

@inproceedings{tiezzi2020lagrangian,
  title={Lagrangian Propagation Graph Neural Networks},
  author={Tiezzi, Matteo and Marra, Giuseppe and Melacci, Stefano and Maggini, Marco and Gori, Marco},
  booktitle={AAAI-20 Workshop on Deep Learning on Graphs: Methodologies and Applications (DLGMA)},
  year={2020}
}