Timeseries Analysis, Gesture Recognition and Classification



Angelo Cenedese – Assistant Professor at the Department of Information Engineering - University of Padova


Gian Antonio Susto – Research Fellow at the Department of Information Engineering - University of Padova



Nowadays, pervasive networking of sensors and actuators has changed the way we interact with the environment. Thanks to advances in technology and to novel paradigms in distributed systems theory as well as in information and coding theory, these devices offer access to information of unprecedented quality and quantity, with the potential to revolutionize our ability to control the human space.


Gesture recognition

Within the context of Home Automation, the design of man-machine interfaces has assumed a central role in the development of smart environments. In this respect, gesture-based interaction measured through inertial devices represents a fascinating solution, enabled by a new generation of ubiquitous technologies that make it possible to pervasively and seamlessly control the human space.

This research line concerns a Machine Learning (ML) approach to gesture recognition (GR) in its three main aspects: (i) event identification, (ii) feature extraction, and (iii) classification. In detail, an informative and compact representation of the gesture input signals is defined, combining feature extraction with time-domain analysis through signal warping; a pre-processing phase based on Principal Component Analysis (PCA) is proposed to improve performance in real-world scenario conditions; and, finally, parsimonious classification techniques based on Sparse Bayesian Learning are designed and compared with more classical ML algorithms.
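The feature-extraction and PCA steps above can be sketched in a few lines of numpy. Everything below is illustrative: the per-axis feature set, the number of retained components, and the nearest-centroid classifier (a simple stand-in for the Sparse Bayesian Learning classifier used in the actual work) are assumptions, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(signal):
    # signal: (T, 3) tri-axial accelerometer samples for one gesture.
    # Illustrative feature set (assumption): per-axis mean, std, energy.
    feats = []
    for axis in range(signal.shape[1]):
        x = signal[:, axis]
        feats += [x.mean(), x.std(), np.sum(x ** 2) / len(x)]
    return np.array(feats)

def pca_fit(X, n_components):
    # Centre the data; the SVD of the centred matrix gives the
    # principal directions.
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

# Toy dataset: two synthetic gesture classes with different
# per-axis amplitude signatures.
def make_gesture(label, T=50):
    base = np.sin(np.linspace(0, 2 * np.pi, T))[:, None]
    offset = np.array([1.0, -1.0, 0.5]) if label == 0 else np.array([-1.0, 1.0, 0.0])
    return base * offset + 0.1 * rng.standard_normal((T, 3))

X = np.array([extract_features(make_gesture(l)) for l in [0, 1] * 20])
y = np.array([0, 1] * 20)

mean, comps = pca_fit(X, n_components=3)
Z = pca_transform(X, mean, comps)

# Nearest-centroid classification in the reduced feature space:
# a deliberately simple stand-in for Sparse Bayesian Learning.
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

In practice the dimensionality reduction serves exactly the purpose described above: it discards feature directions dominated by user- and acquisition-dependent noise before the classifier is trained.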

These contributions yield a system that is user independent, device independent, and device-orientation independent, while providing high classification accuracy.
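The page does not detail how device-orientation independence is achieved; one standard trick (an illustrative assumption here, not necessarily the authors' method) is to work with the acceleration magnitude, which is invariant under any rotation of the device frame:

```python
import numpy as np

rng = np.random.default_rng(1)

def rotation_matrix(axis, angle):
    # Rodrigues' formula: rotation by `angle` around unit vector `axis`.
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# A gesture recorded in one device orientation...
signal = rng.standard_normal((100, 3))
# ...and the same gesture with the device held differently.
R = rotation_matrix(np.array([1.0, 2.0, 3.0]), 0.7)
rotated = signal @ R.T

# The per-sample magnitude is identical in both recordings, so any
# feature computed from it cannot depend on device orientation.
mag_a = np.linalg.norm(signal, axis=1)
mag_b = np.linalg.norm(rotated, axis=1)
assert np.allclose(mag_a, mag_b)
```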

Three datasets are available for this application:

  • Dataset A: Control Dataset, where for each gesture type 30 instances are repeated by a single person using the same device and device orientation, varying how the gesture is performed (wide/narrow/deformed movements);
  • Dataset B: Real-scenario Dataset, where more than 30 people are asked to perform a set of gestures (for a grand total of 530 gesture instances); two different devices are used alternately during the acquisition, and each user is free to choose the device orientation, gesture speed, direction, amplitude, and starting point of the movement;
  • Dataset C: Extended gesture class Dataset, where an extended set of 8 gesture classes has been tested, corresponding to 20 different movements (taking into account the different ways users perform the gestures).
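A multi-user dataset such as Dataset B naturally supports a leave-one-user-out protocol for verifying user independence: train on all users but one, test on the held-out user, and average. The sketch below assumes a hypothetical data layout (one feature vector per gesture instance, with the performing user's id alongside) and reuses a simple nearest-centroid classifier purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical layout (assumption): 10 users, 6 gesture instances each,
# 5 features per instance, binary gesture labels.
n_users, per_user, n_feats = 10, 6, 5
users = np.repeat(np.arange(n_users), per_user)
labels = rng.integers(0, 2, size=users.size)
# Synthetic features made weakly informative about the label.
X = rng.standard_normal((users.size, n_feats)) + labels[:, None]

def nearest_centroid_fit(X, y):
    # One centroid per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    classes = sorted(model)
    d = np.stack([((X - model[c]) ** 2).sum(axis=1) for c in classes], axis=1)
    return np.array(classes)[d.argmin(axis=1)]

# Leave-one-user-out: the test user's gestures never influence training,
# so the score reflects user-independent performance.
accs = []
for u in range(n_users):
    train, test = users != u, users == u
    model = nearest_centroid_fit(X[train], labels[train])
    pred = nearest_centroid_predict(model, X[test])
    accs.append((pred == labels[test]).mean())
mean_acc = float(np.mean(accs))
```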