Abstract

In recent years, researchers in gesture theory have shown growing interest in automating the extraction of gesture information, since automation can reduce the inherent subjectivity of gesture studies. To produce data for linguistic and psycholinguistic studies, researchers typically annotate videos of people speaking and gesturing; this annotation task is costly and is the target of automation. Such videos form the datasets that enable the development of automated models able to carry out part of the analysis of gestures. In this paper, we present detailed documentation of the Gesture Phase Segmentation Dataset, published in the UCI Machine Learning Repository, together with an extension of that dataset. The dataset is specifically prepared for developing models that segment gestures into their phases. The extended dataset comprises nine videos of three people telling stories and gesturing. The data were captured with a Microsoft Kinect sensor and are represented by spatial coordinates and temporal information (velocity and acceleration). Each frame is labeled with one of four gesture phases (preparation, stroke, hold, and retraction) or a rest position.
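To make the feature representation concrete: the abstract states that the spatial coordinates are accompanied by temporal information (velocity and acceleration). The sketch below is a minimal illustration of how such kinematic features could be derived from raw coordinate time series by finite differences. It is an assumption-laden example, not the paper's actual preprocessing: the column names, the 30 fps frame rate, and the add_kinematics helper are all hypothetical and would need to be adapted to the real column schema of the UCI files.

import numpy as np
import pandas as pd

# Hypothetical column names for one tracked point (e.g., left hand);
# the actual UCI files name the tracked joints differently.
COORD_COLS = ["lhx", "lhy", "lhz"]

def add_kinematics(df: pd.DataFrame, dt: float = 1.0 / 30.0) -> pd.DataFrame:
    """Append finite-difference velocity and acceleration magnitudes
    for one tracked point, illustrating the kind of temporal features
    the abstract describes. dt assumes a 30 fps Kinect stream."""
    pos = df[COORD_COLS].to_numpy()
    vel = np.gradient(pos, dt, axis=0)   # first derivative of position
    acc = np.gradient(vel, dt, axis=0)   # second derivative of position
    df["velocity"] = np.linalg.norm(vel, axis=1)      # scalar speed
    df["acceleration"] = np.linalg.norm(acc, axis=1)  # scalar acceleration
    return df

# Usage (assuming a raw coordinate file downloaded from the repository):
# df = pd.read_csv("a1_raw.csv")
# df = add_kinematics(df)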
