The Grenoble System for the Social Touch Challenge at ICMI 2015

1 LIG - Laboratoire d'Informatique de Grenoble
2 MICA - International Research Institute MICA, Speech Communication
3 PRIMA - Perception, Recognition and Integration for Observation of Activity, Inria Grenoble - Rhône-Alpes, UJF - Université Joseph Fourier - Grenoble 1, INPG - Institut National Polytechnique de Grenoble, CNRS - Centre National de la Recherche Scientifique : UMR 5217

Abstract : New technologies, and especially robotics, are moving towards more natural user interfaces. Work has been done on different modalities of interaction such as sight (visual computing) and audio (speech and audio recognition), but other modalities remain less researched. The touch modality is one of the least studied in HRI, yet it could be valuable for naturalistic interaction. However, touch signals can vary in semantics. It is therefore necessary to recognize touch gestures in order to make human-robot interaction even more natural. We propose a method to recognize touch gestures. This method was developed on the CoST corpus and then applied directly to the HAART dataset as a participation in the Social Touch Challenge at ICMI 2015. Our touch gesture recognition process is detailed in this article to make it reproducible by other research teams. Besides describing the feature set, we manually filtered the training corpus to produce two datasets. For the challenge, we submitted six different systems: a Support Vector Machine and a Random Forest classifier for the HAART dataset, and, for the CoST dataset, the same two classifiers tested in two conditions, using either the full or the filtered training datasets. As reported by the organizers, our systems achieved the best correct rates in this year's challenge (70.91% on HAART, 61.34% on CoST). Our performance is slightly better than that of the other participants but remains below previously reported state-of-the-art results.
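The abstract names the classifier setup (an SVM and a Random Forest trained on touch-gesture feature vectors) without giving the implementation; the sketch below is a minimal illustration of that setup using scikit-learn. The feature dimensionality, number of gesture classes, and the random data are placeholders, not the authors' actual feature set or code.

# Minimal sketch (not the authors' released code): training the two classifier
# families named in the abstract -- an SVM and a Random Forest -- on per-gesture
# feature vectors. Data, feature count and class count below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: one fixed-length feature vector per touch gesture,
# e.g. statistics computed over the pressure-sensor frames of that gesture.
rng = np.random.default_rng(0)
X = rng.random((600, 40))          # 600 gestures x 40 features (placeholder)
y = rng.integers(0, 7, size=600)   # 7 gesture classes (placeholder)

# One SVM system and one Random Forest system, as submitted per dataset.
svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
rf_clf = RandomForestClassifier(n_estimators=500, random_state=0)

for name, clf in [("SVM", svm_clf), ("Random Forest", rf_clf)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")

In the challenge setting, the same two classifiers would be trained separately on the full and on the manually filtered training sets, which is how the four CoST submissions plus the two HAART submissions add up to six systems.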

Keywords : Multimodal perception; gesture recognition; touch challenge

Authors: Viet Cuong Ta, Wafa Johal, Maxime Portaz, Eric Castelli, Dominique Vaufreydaz

Source: https://hal.archives-ouvertes.fr/
