An overview of Graph Neural Networks up to Lagrangian Propagation GNNs
Seminar, Maasai, Nice Côte d'Azur, France
Several real-world applications are characterized by data with a complex structure that can be naturally represented as graphs. The popularity of deep learning techniques has renewed the interest in neural architectures able to process such patterns, inspired by the Graph Neural Network (GNN) model proposed by SAILab. In this talk, I will present the original GNN model, its recent evolutions (GCN, GAT, …), and our implementation framework in TensorFlow and PyTorch.

The original GNNs encode the state of the nodes of the graph by means of an iterative diffusion procedure that propagates information among neighbouring nodes and that, during the learning stage, must be run at every epoch until the fixed point of a learnable state transition function is reached. We propose a novel approach to learning in GNNs, based on constrained optimization in the Lagrangian framework, named Lagrangian Propagation Graph Neural Networks. The transition function and the node states are learned jointly: the state convergence procedure is implicitly expressed by a constraint satisfaction mechanism, avoiding the iterative epoch-wise procedure and the unfolding of the network. The computational scheme searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables, and Lagrange multipliers. This scheme is further enhanced by multiple layers of constraints that accelerate the diffusion process. An experimental analysis shows that the proposed approach compares favourably with popular models on several benchmarks.
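As a rough, illustrative sketch of the formulation discussed in the talk (the notation below is simplified and assumed for exposition, not necessarily the exact one used in the paper): the fixed-point condition on the node states is written as an explicit per-node constraint, and learning becomes a saddle-point problem on the associated Lagrangian.

% x_v: state of node v;  ell_v: label/features of node v;  ne(v): neighbours of v
% f_a: learnable state transition function;  g_w: learnable output (readout) function
% S: set of supervised nodes;  G: any function with G(0) = 0 (e.g. the identity)
\[
  x_v = f_a\big(x_{\mathrm{ne}(v)},\, \ell_{\mathrm{ne}(v)},\, \ell_v\big), \qquad \forall v \in V
\]
\[
  \mathcal{L}(\theta, X, \Lambda) \;=\; \sum_{v \in S} \mathrm{loss}\big(g_w(x_v),\, y_v\big)
  \;+\; \sum_{v \in V} \lambda_v\, \mathcal{G}\Big(x_v - f_a\big(x_{\mathrm{ne}(v)},\, \ell_{\mathrm{ne}(v)},\, \ell_v\big)\Big)
\]

Under this view, training searches for saddle points of \(\mathcal{L}\): gradient descent in the weights \(\theta\) and node states \(X\), and gradient ascent in the Lagrange multipliers \(\Lambda\), so that state convergence emerges from constraint satisfaction rather than from an explicit fixed-point iteration at every epoch.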