Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digestion as well.

Pages

Posts

Portfolio

Publications

Inductive–Transductive Learning with Graph Neural Networks

Published in 8th IAPR TC3 Workshop, ANNPR 2018, 2018

8th IAPR TC3 Workshop, ANNPR 2018 - Inductive–Transductive Learning GNNs

Recommended citation: Rossi, A., Tiezzi, M., Dimitri, G. M., Bianchini, M., Maggini, M., & Scarselli, F. (2018, September). "Inductive–transductive learning with graph neural networks." IAPR Workshop on Artificial Neural Networks in Pattern Recognition (pp. 201-212). Springer, Cham. https://link.springer.com/chapter/10.1007%2F978-3-319-99978-4_16

Video Surveillance of Highway Traffic Events by Deep Learning Architectures

Published in 27th International Conference on Artificial Neural Networks, Rhodes, Greece, 2018

27th International Conference on Artificial Neural Networks, ICANN18 - Video Surveillance of Highway Traffic Events by Deep Learning Architectures

Recommended citation: Tiezzi, M., Melacci, S., Maggini, M., & Frosini, A. (2018, October). "Video Surveillance of Highway Traffic Events by Deep Learning Architectures." International Conference on Artificial Neural Networks (pp. 584-593). Springer, Cham. https://link.springer.com/chapter/10.1007%2F978-3-030-01424-7_57

Software

The Graph Neural Network Framework

University of Siena, SAILab, Italy, 2018

The Graph Neural Network (GNN) is a connectionist model particularly suited for problems whose domain can be represented by a set of patterns and relationships between them.

SAILenv: Learning in Virtual Visual Environments Made Simple

University of Siena, SAILab, Italy, 2020

SAILenv is a Virtual Environment powered by Unity3D. It includes 3 pre-built scenes with full pixel-wise annotations, and it is capable of generating frames at real-time speed, complete with pixel-wise annotations, optical flow, and depth.

Talks

LabMeeting: Capsule Networks

Published:

Convolutional neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs, interleaved with subsampling.

LabMeeting: Excursus in the State of the art of object detection - Deep learning models, performance, practical tests

Published:

In image classification, an image with a single object is the focus and the task is to say what it contains. But when we look at the world around us, we carry out far more complex tasks: there are multiple overlapping objects and different backgrounds, and we not only classify these different objects but also identify their boundaries and the relations among them. These tasks fall under the names of object detection and instance segmentation.

LabMeeting: Adversarial Reprogramming of Neural Networks

Published:

Adversarial examples are defined as “inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake”. Indeed, in the computer vision scenario it has been shown that well-crafted perturbations to input images can induce classification errors, such as confusing a cat with a computer. In general, adversarial attacks are designed to degrade the performance of models or to prompt the prediction of specific output classes. In this recent work, the authors introduce a framework in which the goal of adversarial attacks is to reprogram the target model to perform a completely new task. This is accomplished by optimizing for a single adversarial perturbation that can be added to all test-time inputs. In such a way, the target model performs a task chosen by the adversary when processing these inputs—even if it was not trained on this task.
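The input transformation behind this attack can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the shapes (a host model taking 224×224 inputs, an adversary task on 28×28 inputs) and the names `adversarial_program`, `W`, and `mask` are assumptions made for the example. The key point it shows is that the same single learned perturbation `W` is applied to every adversary-task input.

```python
import numpy as np

def adversarial_program(x_small, W, mask):
    """Embed a small adversary-task input at the center of a host-sized
    image and add the single learned perturbation W outside that region."""
    H = W.shape[0]
    h = x_small.shape[0]
    pad = (H - h) // 2
    x_big = np.zeros_like(W)
    x_big[pad:pad + h, pad:pad + h] = x_small
    # tanh keeps the reprogrammed input in a bounded range for the host model
    return np.tanh(W * mask + x_big)

# Hypothetical shapes: host model expects 224x224, adversary task uses 28x28.
W = 0.1 * np.random.randn(224, 224)      # single perturbation, shared by all inputs
mask = np.ones((224, 224))
pad = (224 - 28) // 2
mask[pad:pad + 28, pad:pad + 28] = 0.0   # leave the embedded input unperturbed
x = np.random.rand(28, 28)               # one adversary-task input
x_adv = adversarial_program(x, W, mask)  # fed to the frozen target model
```

In the actual attack, only `W` is optimized (by gradient descent through the frozen target model) so that the host model's output classes, suitably relabeled, solve the adversary's task.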

ACDL Satellite Workshop on Graph Neural Networks - Lagrangian Propagation Graph Neural Networks – A constraint-based formulation

Published:

GNNs exploit a set of state variables, each assigned to a graph node, and a diffusion mechanism of the states among neighbor nodes, to implement an iterative procedure to compute the fixed point of the (learnable) state transition function. We propose a novel approach to the state computation and the learning algorithm for GNNs, based on a constraint optimization task solved in the Lagrangian framework. The state convergence procedure is implicitly expressed by the constraint satisfaction mechanism and does not require a separate iterative phase for each epoch of the learning procedure. In fact, the computational structure is based on the search for saddle points of the Lagrangian in the adjoint space composed of weights, neural outputs (node states), and Lagrange multipliers.
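The constrained formulation described above can be written compactly. The notation here is an assumption chosen for illustration: x_v is the state of node v, f_θ the learnable state transition function acting on the neighborhood ne[v] and node label l_v, g the output function, ℓ the supervised loss on the labeled nodes S, and λ_v the Lagrange multiplier enforcing convergence at node v.

```latex
\min_{\theta,\, x} \;\max_{\lambda}\; \mathcal{L}(\theta, x, \lambda),
\qquad
\mathcal{L}(\theta, x, \lambda)
  = \sum_{v \in S} \ell\big(g(x_v), y_v\big)
  + \sum_{v \in V} \lambda_v \big(x_v - f_\theta(x_{\mathrm{ne}[v]}, l_v)\big)
```

The constraint x_v = f_θ(x_ne[v], l_v) encodes the fixed point of the state transition function, so state convergence is enforced by constraint satisfaction at the saddle points of ℒ rather than by a separate iterative relaxation phase at each epoch.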

LabMeeting: Knowledge graphs for image classification and semantic navigation – from reality to virtual worlds

Published:

One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples.