Published in 8th IAPR TC3 Workshop, ANNPR 2018, 2018
8th IAPR TC3 Workshop, ANNPR 2018 - Inductive–Transductive Learning GNNs
Recommended citation: Rossi, A., Tiezzi, M., Dimitri, G. M., Bianchini, M., Maggini, M., & Scarselli, F. (2018, September). "Inductive–Transductive Learning with Graph Neural Networks." In IAPR Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2018 (pp. 201–212). Springer, Cham. https://link.springer.com/chapter/10.1007%2F978-3-319-99978-4_16
Published in 27th International Conference on Artificial Neural Networks, Rhodes, Greece, 2018
27th International Conference on Artificial Neural Networks, ICANN18 - Video Surveillance of Highway Traffic Events by Deep Learning Architectures
Recommended citation: Tiezzi, M., Melacci, S., Maggini, M., & Frosini, A. (2018, October). "Video Surveillance of Highway Traffic Events by Deep Learning Architectures." In International Conference on Artificial Neural Networks, ICANN 2018 (pp. 584–593). Springer, Cham. https://link.springer.com/chapter/10.1007%2F978-3-030-01424-7_57
Published in AAAI20 - DLGMA Workshop paper, 2020
AAAI20 - DLGMA Workshop paper - LPGNN
Recommended citation: Matteo Tiezzi, Giuseppe Marra, Stefano Melacci, Marco Maggini and Marco Gori (2020). "Lagrangian Propagation Graph Neural Networks." AAAI20 - DLGMA Workshop. https://deep-learning-graphs.bitbucket.io/dlg-aaai20/
Published in IJCNN2020, 2020
IJCNN2020 - Local Propagation
Recommended citation: Giuseppe Marra, Matteo Tiezzi, Stefano Melacci, Alessandro Betti, Marco Maggini and Marco Gori (2020). "Local Propagation in Constraint-based Neural Network." IJCNN2020. https://ieeexplore.ieee.org/document/9207043
Published in ECAI2020, 2020
ECAI2020 - LPGNN
Recommended citation: Matteo Tiezzi, Giuseppe Marra, Stefano Melacci, Marco Maggini and Marco Gori (2020). "A Lagrangian Approach to Information Propagation in Graph Neural Networks." ECAI2020. http://ebooks.iospress.nl/publication/55057
Published in ICML20 - GRL+ Workshop paper, 2020
ICML20 - GRL+ Workshop paper - DeepLPGNN
Recommended citation: Matteo Tiezzi, Giuseppe Marra, Stefano Melacci and Marco Maggini (2020). "Deep Lagrangian Propagation in Graph Neural Networks." ICML20 - GRL+ Workshop. https://grlplus.github.io/papers/51.pdf
Published in NeurIPS2020, 2020
NeurIPS2020 - Focus
Recommended citation: Matteo Tiezzi, Stefano Melacci, Alessandro Betti, Marco Maggini, Marco Gori (2020). "Focus of Attention Improves Information Transfer in Visual Features." NeurIPS2020. https://papers.nips.cc/paper/2020/hash/fc2dc7d20994a777cfd5e6de734fe254-Abstract.html
Published in TPAMI, 2021
TPAMI - Deep LP-GNNs
Recommended citation: Matteo Tiezzi, Giuseppe Marra, Stefano Melacci, Marco Maggini (2021). "Deep Constraint-based Propagation in Graph Neural Networks." TPAMI. https://doi.org/10.1109/TPAMI.2021.3073504
Published in IJCNN, 2021
IJCNN - Friendly Training
Recommended citation: Simone Marullo, Matteo Tiezzi, Marco Gori, Stefano Melacci (2021). "Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier." IJCNN. https://doi.org/10.1109/IJCNN52387.2021.9534165
Published in CSSL@IJCAI2021, 2021
CSSL@IJCAI2021
Recommended citation: Enrico Meloni, Alessandro Betti, Lapo Faggi, Simone Marullo, Matteo Tiezzi, Stefano Melacci (2021). "Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments." CSSL@IJCAI2021. https://arxiv.org/abs/2109.07855
Published in ICMLA2021, 2021
ICMLA2021
Recommended citation: Enrico Meloni, Matteo Tiezzi, Luca Pasqualini, Marco Gori, Stefano Melacci (2021). "Messing Up 3D Virtual Environments: Transferable Adversarial 3D Objects." ICMLA2021. https://doi.org/10.1109/ICMLA52953.2021.00009
Published in AAAI2022, 2022
AAAI2022
Recommended citation: Simone Marullo, Matteo Tiezzi, Marco Gori, Stefano Melacci (2022). "Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks." AAAI2022. https://arxiv.org/abs/2112.09968
Published in IJCAI2022, 2022
IJCAI2022
Recommended citation: Matteo Tiezzi, Simone Marullo, Lapo Faggi, Enrico Meloni, Alessandro Betti, Stefano Melacci (2022). "Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams." IJCAI2022. https://arxiv.org/abs/2204.12193
Italy, University of Siena, SAILab, 2018
The Graph Neural Network (GNN) is a connectionist model particularly suited for problems whose domain can be represented by a set of patterns and relationships between them.
Italy, University of Siena, SAILab, 2020
SAILenv is a Virtual Environment powered by Unity3D. It includes three pre-built scenes and generates frames at real-time speed, complete with full pixel-wise annotations, optical flow, and depth.
Italy, University of Siena, SAILab, 2020
The Graph Neural Network (GNN) is a connectionist model particularly suited for problems whose domain can be represented by a set of patterns and relationships between them.
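To make the diffusion idea concrete, here is a minimal PyTorch sketch of the fixed-point iteration at the core of the original GNN model; it is illustrative only (names such as `gnn_fixed_point` and `transition` are ours, not the released SAILab code). Each node state is repeatedly recomputed from the aggregated states of its neighbours and the node label until the update stabilizes.

```python
import torch

def gnn_fixed_point(x, edges, labels, transition, iters=50, tol=1e-4):
    """Iterate x_v = f(sum of neighbour states, l_v) until a fixed point."""
    src, dst = edges  # edge list: src[i] -> dst[i]
    for _ in range(iters):
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])  # neighbour sum
        x_new = transition(torch.cat([agg, labels], dim=-1))
        if (x_new - x).abs().max() < tol:  # diffusion has converged
            return x_new
        x = x_new
    return x

# toy usage (all sizes illustrative): 4 nodes on a ring, state dim 8, label dim 3
n, d, dl = 4, 8, 3
transition = torch.nn.Sequential(torch.nn.Linear(d + dl, d), torch.nn.Tanh())
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
states = gnn_fixed_point(torch.zeros(n, d), edges, torch.randn(n, dl), transition)
```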
Italy, University of Siena, SAILab, 2020
Lagrangian Propagation Graph Neural Network - LP-GNN
Published:
Convolutional neural networks used to recognize shapes typically stack one or more layers of learned feature detectors that produce scalar outputs, interleaved with subsampling layers.
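As a minimal illustration of that layout (sizes and names are ours, not from the post), such a shape recognizer in PyTorch is just convolutional feature detectors interleaved with subsampling:

```python
import torch.nn as nn

# learned feature detectors (each unit emits a scalar per position),
# interleaved with subsampling via max pooling; sizes are illustrative
shape_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(),
    nn.MaxPool2d(2),   # subsampling
    nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(),
    nn.MaxPool2d(2),   # subsampling
)
```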
Published:
In image classification, an image with a single object is the focus and the task is to say what it contains. But when we look at the world around us, we carry out far more complex tasks: there are multiple overlapping objects and different backgrounds, and we not only classify these objects but also identify their boundaries, differences, and relations among them. These tasks fall under the names of object detection and instance segmentation.
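As a concrete example, a pre-trained instance-segmentation model from torchvision performs both steps at once, returning boxes, class labels, scores and per-instance masks for each image. A minimal sketch, assuming torchvision ≥ 0.13 (older releases take `pretrained=True` instead of `weights`):

```python
import torch
import torchvision

# pre-trained Mask R-CNN: detection (boxes, labels) + instance segmentation (masks)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)   # stand-in for a real RGB image scaled to [0, 1]
with torch.no_grad():
    (pred,) = model([image])      # one dict per input image
print(pred["boxes"].shape, pred["labels"].shape, pred["masks"].shape)
```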
Published:
Adversarial examples are defined as “inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake”. Indeed, in the computer vision scenario it has been shown that well-crafted perturbations to input images can induce classification errors, such as confusing a cat with a computer. In general, adversarial attacks are designed to degrade the performance of models or to prompt the prediction of specific output classes. In this recent work, the authors introduce a framework in which the goal of adversarial attacks is to reprogram the target model to perform a completely new task. This is accomplished by optimizing for a single adversarial perturbation that can be added to all test-time inputs. In such a way, the target model performs a task chosen by the adversary when processing these inputs, even if it was not trained on this task.
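A minimal sketch of that framework, with our own illustrative names (`victim` is the frozen target model; `loader` yields inputs of the adversary's task with labels already remapped into the victim's output space): a single perturbation is optimized so that, once added to every input, the frozen model solves the new task.

```python
import torch
import torch.nn.functional as F

def reprogram(victim, loader, steps=1000, lr=0.05, eps=0.2):
    """Learn one universal perturbation that repurposes a frozen classifier."""
    victim.eval()
    for p in victim.parameters():
        p.requires_grad_(False)            # the target model is never updated
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[0], requires_grad=True)  # one shared perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    for _, (x, y) in zip(range(steps), loader):
        loss = F.cross_entropy(victim(x + delta), y)     # adversary's task loss
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)        # keep the perturbation bounded
    return delta.detach()
```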
Published:
The ability to decompose visual scenes into abstract building blocks is crucial for general intelligence. These basic building blocks are capable of representing meaningful properties, interactions and other relations across scenes.
Published:
GNNs exploit a set of state variables, each assigned to a graph node, and a diffusion mechanism of the states among neighbor nodes, to implement an iterative procedure to compute the fixed point of the (learnable) state transition function. We propose a novel approach to the state computation and the learning algorithm for GNNs, based on a constraint optimization task solved in the Lagrangian framework. The state convergence procedure is implicitly expressed by the constraint satisfaction mechanism and does not require a separate iterative phase for each epoch of the learning procedure. In fact, the computational structure is based on the search for saddle points of the Lagrangian in the adjoint space composed of weights, neural outputs (node states), and Lagrange multipliers.
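A minimal sketch of this saddle-point search (illustrative sizes and names, not the paper's code): the node states x and the multipliers λ are free variables, the residual G = x − f(neighbour aggregate, label) encodes the state-convergence constraint, and the updates descend on weights and states while ascending on the multipliers.

```python
import torch

n, d = 4, 8                                  # toy graph: 4 nodes on a ring
src, dst = torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 0])
labels = torch.randn(n, d)                   # node labels (stand-in values)
f = torch.nn.Sequential(torch.nn.Linear(2 * d, d), torch.nn.Tanh())  # transition
x = torch.zeros(n, d, requires_grad=True)    # free node-state variables
lam = torch.zeros(n, d, requires_grad=True)  # Lagrange multipliers

descent = torch.optim.SGD([x, *f.parameters()], lr=0.01)
for _ in range(200):
    agg = torch.zeros_like(x).index_add_(0, dst, x[src])
    G = x - f(torch.cat([agg, labels], dim=-1))  # constraint residual, 0 at a fixed point
    task_loss = x.pow(2).mean()                  # stand-in for the supervised output loss
    L = task_loss + (lam * G).sum()              # the Lagrangian
    descent.zero_grad(); lam.grad = None; L.backward()
    with torch.no_grad():
        lam += 0.01 * lam.grad                   # gradient ascent on the multipliers
    descent.step()                               # gradient descent on states and weights
```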
Published:
One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples.
Published:
My spotlight presentation for the Lagrangian Propagation Graph Neural Network paper at the AAAI20-DLGMA Workshop.
Published:
On Mutual Information Maximization for Representation Learning - An excursus into Deep InfoMax, its variants and their shortcomings
Published:
Deep Lagrangian Propagation in Graph Neural Networks
Published:
Several real-world applications are characterized by data that exhibit a complex structure that can be represented using graphs. The popularity of deep learning techniques renewed the interest in neural architectures able to process these patterns, inspired by the Graph Neural Network (GNN) model, proposed by SAILab. In this talk, I will present the original GNN model, recent evolutions (GCNs, GATs, …) and our implementation framework in Tensorflow and PyTorch. The original GNNs encode the state of the nodes of the graph by means of an iterative diffusion procedure that, during the learning stage, must be computed at every epoch, until the fixed point of a learnable state transition function is reached, propagating the information among the neighbouring nodes. We propose a novel approach to learning in GNNs, based on constrained optimization in the Lagrangian framework, named Lagrangian Propagation Graph Neural Networks. Learning both the transition function and the node states is the outcome of a joint process, in which the state convergence procedure is implicitly expressed by a constraint satisfaction mechanism, avoiding iterative epoch-wise procedures and network unfolding. Our computational structure searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables and Lagrange multipliers. This process is further enhanced by multiple layers of constraints that accelerate the diffusion process. An experimental analysis shows that the proposed approach compares favourably with popular models on several benchmarks.
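The deeper variant mentioned above can be sketched by stacking one state tensor and one constraint per layer, with each layer's transition fed by the previous layer's states (again an illustrative outline with our own names, not the released code):

```python
import torch

n, d, K = 4, 8, 3                            # toy sizes; K constraint layers
src, dst = torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 0])
states = [torch.zeros(n, d, requires_grad=True) for _ in range(K)]
trans = [torch.nn.Linear(2 * d, d) for _ in range(K)]

def residuals(node_inputs):
    """One constraint per layer: G_k = x_k - tanh(f_k([agg(x_k), x_{k-1}]))."""
    res, prev = [], node_inputs              # layer 0 is conditioned on node inputs
    for x, f in zip(states, trans):
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])
        res.append(x - torch.tanh(f(torch.cat([agg, prev], dim=-1))))
        prev = x
    return res                               # each residual gets its own multipliers
```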