Overview of the Graph Neural Network model - GNN

The Graph Neural Network (GNN) [SGT+09b] is a connectionist model particularly suited for problems whose domain can be represented by a set of patterns and relationships between them.

In such problems, a prediction about a given pattern can exploit all the related information, which includes the pattern features, the pattern relationships and, in general, the whole graph that represents the domain. The distinctive feature of the GNN is its ability to compute the output prediction by processing the input graph directly, without any preprocessing into a vectorial representation.
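In the formulation of the original paper [SGT+09b], each node n is associated with a state x_n, computed from the node label, the labels of its incident edges, and the states and labels of its neighbours, and with an output o_n:

    x_n = f_w(l_n, l_{co[n]}, x_{ne[n]}, l_{ne[n]}),        o_n = g_w(x_n, l_n)

where f_w is the state transition function and g_w is the output function, both implemented by neural networks; the states are updated iteratively until they reach a fixed point, and the outputs are then read out from the converged states.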

GNNs have been proven to be universal approximators for a class of functions on graphs and have been applied to several tasks, including spam detection, object localization in images, and molecule classification.

PyTorch

We provide a novel implementation that exploits the PyTorch framework. It simplifies the creation of GNNs and provides several tools for building the input data, including utilities and examples for using Deep Graph Library (DGL) input data.
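For instance, a typical DGL graph that could serve as input data looks like the following (a minimal sketch using the standard DGL API; the feature sizes are arbitrary, and the conversion to this implementation's own input format is not shown here):

    import dgl
    import torch

    # A small directed graph with 3 nodes and 3 edges, given as (source, destination) pairs.
    src = torch.tensor([0, 1, 2])
    dst = torch.tensor([1, 2, 0])
    g = dgl.graph((src, dst))

    # Node features (3 nodes, 4 features each) and node-level targets.
    g.ndata['feat'] = torch.randn(3, 4)
    g.ndata['label'] = torch.tensor([0, 1, 0])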

TensorFlow

The GNN was originally implemented in MATLAB, but nowadays frameworks such as TensorFlow are more popular in the machine learning community.

The proposed implementation is modular and consists of two components: a core module implementing the GNN model, and a module for defining the loss function, the metric function, and the sub-networks that are applied at node level in the GNN computation.

The latter module can be customized by the user to address specific tasks and to implement extensions of the basic model.
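As a rough illustration, the user-defined components for a node classification task might look as follows (a minimal PyTorch sketch; the class names, constructor arguments, and the way the components are wired into the GNN are hypothetical and do not reflect the library's actual interface):

    import torch
    import torch.nn as nn

    # Hypothetical state-transition sub-network, applied at node level: it maps a node
    # label concatenated with the aggregated neighbour states to a new node state.
    class StateNet(nn.Module):
        def __init__(self, label_dim, state_dim, hidden_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(label_dim + state_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, state_dim),
            )

        def forward(self, node_labels, aggregated_states):
            return self.net(torch.cat([node_labels, aggregated_states], dim=-1))

    # Hypothetical output sub-network: maps the converged node state to class scores.
    class OutputNet(nn.Module):
        def __init__(self, state_dim, num_classes, hidden_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, num_classes),
            )

        def forward(self, node_states):
            return self.net(node_states)

    # User-defined loss and metric for node classification.
    loss_fn = nn.CrossEntropyLoss()

    def accuracy(logits, targets):
        return (logits.argmax(dim=-1) == targets).float().mean()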

The proposed implementation of GNNs, based on tensor algebra, is particularly efficient and easily parallelizable on modern multi-CPU and multi-GPU hardware architectures.
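For example, with the neighbour states aggregated by a sum, one iteration of the node-level update can be written as a single sparse matrix product (a sketch on an arbitrary toy graph; the adjacency matrix A and the aggregation-by-sum choice are assumptions made only for illustration):

    import torch

    num_nodes, state_dim = 4, 8

    # Sparse adjacency matrix A with A[dst, src] = 1 for the edges 0->1, 1->2, 2->0, 2->3.
    indices = torch.tensor([[1, 2, 0, 3],
                            [0, 1, 2, 2]])
    values = torch.ones(indices.shape[1])
    A = torch.sparse_coo_tensor(indices, values, (num_nodes, num_nodes))

    X = torch.randn(num_nodes, state_dim)   # current node states
    aggregated = torch.sparse.mm(A, X)      # sum of neighbour states, one sparse matmul

    # 'aggregated' is then fed (together with the node labels) to the state-transition
    # sub-network; the whole update parallelizes naturally on multi-CPU / multi-GPU hardware.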

Free software

GNN is free software; you can redistribute it and/or modify it under the terms of the 3-clause BSD License. We welcome contributions. Join us on GitHub.

History

The original MATLAB version was designed and written by Franco Scarselli and Gabriele Monfardini in 2011.

The PyTorch port was written by Matteo Tiezzi, PhD Student at the SAILab, in May 2020.

The TensorFlow port was written by Alberto Rossi and Matteo Tiezzi, PhD Students at the SAILab, in January 2018.
