Abstract

Existing gesture classification techniques assign a categorical label to each gesture instance and can recognize only a predetermined set of gesture classes. These techniques therefore lack adaptability to new or unseen gestures, which is precisely the setting addressed by zero-shot learning (ZSL). Hence, we propose to identify the properties (attributes) of gestures and infer the categorical label from them, instead of recognizing the class label directly. ZSL for gesture recognition has hardly been studied in pattern recognition research, partly due to the lack of benchmarks and specialized datasets with annotations for gesture attributes. To address this gap, this paper presents the first annotated database of attributes for the gestures in the ChaLearn 2013 (CGD2013) and MSRC-12 datasets. We proceeded as follows: first, we identified a finite set of 64 discriminative and representative high-level gesture attributes from the literature. We then conducted crowdsourced human studies on Amazon Mechanical Turk to obtain attribute annotations for 28 gesture classes. Next, we used our dataset to train existing ZSL classifiers to predict attribute labels. Finally, we provide benchmarks for unseen gesture class prediction on CGD2013 and MSRC-12. We have made this dataset publicly available to encourage further research on this problem.
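
To make the attribute-based pipeline concrete, the sketch below illustrates a direct-attribute-prediction (DAP) style classifier: one binary classifier per attribute, with unseen classes scored by how well their known attribute signatures match the attribute posteriors predicted for a test sample. This is only an illustrative sketch, not the paper's implementation; the toy data, feature dimensions, logistic-regression models, and class names are all assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 32))          # placeholder gesture feature vectors
A_train = rng.integers(0, 2, size=(200, 64))  # placeholder binary labels for 64 attributes

# Train one binary classifier per attribute on seen-class data.
attribute_models = []
for j in range(A_train.shape[1]):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, A_train[:, j])
    attribute_models.append(clf)

def predict_unseen_class(x, class_attributes):
    # Posterior probability that each attribute is present in sample x.
    probs = np.array([m.predict_proba(x.reshape(1, -1))[0, 1]
                      for m in attribute_models])
    scores = {}
    for name, signature in class_attributes.items():
        signature = np.asarray(signature)
        # Likelihood of the class's full attribute signature under
        # independent per-attribute predictions (the DAP assumption).
        scores[name] = float(np.prod(np.where(signature == 1, probs, 1.0 - probs)))
    return max(scores, key=scores.get)

# Hypothetical usage: 64-bit attribute signatures for two unseen gesture classes.
unseen = {"wave": rng.integers(0, 2, size=64),
          "point": rng.integers(0, 2, size=64)}
print(predict_unseen_class(rng.normal(size=32), unseen))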
