Abstract

Touch gesture recognition (TGR) plays a pivotal role in many applications, such as socially assistive robots and embodied telecommunication. However, one obstacle to the practicality of existing TGR methods is the individual disparity across subjects. Moreover, a deep neural network trained on multiple existing subjects easily overfits and generalizes poorly to a new subject. Hence, mitigating the discrepancies between the new and existing subjects and establishing a generalized network for TGR is a significant task for realizing reliable human–robot tactile interaction. In this article, a novel framework for Multisource domain Adaptation via Shared-Specific feature projection (MASS) is proposed, which incorporates intradomain discriminant, multidomain discriminant, and cross-domain consistency into a deep learning network for cross-subject TGR. Specifically, the MASS method first extracts the shared features in the common feature space of the training subjects, with which a domain-general classifier is built. Then, the specific features of each pair of training and testing subjects are mapped and aligned in their common feature space, and multiple domain-specific classifiers are trained with the specific features. Finally, the domain-general classifier and domain-specific classifiers are ensembled to predict the labels for the touch samples of a new subject. Experimental results on two datasets show that our proposed MASS method achieves remarkable results for cross-subject TGR. The code of MASS is available at https://github.com/AI-touch/MASS.
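To make the ensemble structure described above concrete, the following is a minimal PyTorch sketch of inference with a shared (domain-general) branch and per-source domain-specific branches. All module names, dimensions, and the simple probability-averaging rule are illustrative assumptions, not the authors' implementation; the actual training objectives (intradomain discriminant, multidomain discriminant, and cross-domain consistency) and alignment procedure are defined in the paper and the released code at https://github.com/AI-touch/MASS.

import torch
import torch.nn as nn


class MASSSketch(nn.Module):
    """Hypothetical sketch of a shared-specific ensemble for cross-subject TGR."""

    def __init__(self, in_dim=256, feat_dim=128, n_classes=10, n_sources=3):
        super().__init__()
        # Shared projection over all training subjects; feeds the
        # domain-general classifier.
        self.shared = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.general_clf = nn.Linear(feat_dim, n_classes)
        # One specific projection + classifier per (training subject, test
        # subject) pair; here one branch per source subject for simplicity.
        self.specific = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
            for _ in range(n_sources)
        )
        self.specific_clf = nn.ModuleList(
            nn.Linear(feat_dim, n_classes) for _ in range(n_sources)
        )

    def forward(self, x):
        # Domain-general prediction from the shared feature space.
        logits = [self.general_clf(self.shared(x))]
        # Domain-specific predictions from each pairwise feature space.
        for proj, clf in zip(self.specific, self.specific_clf):
            logits.append(clf(proj(x)))
        # Ensemble: average the class probabilities of all classifiers
        # (an assumed combination rule; the paper may weight differently).
        return torch.stack([l.softmax(dim=-1) for l in logits]).mean(dim=0)


model = MASSSketch()
x = torch.randn(4, 256)          # a batch of touch-gesture feature vectors
pred = model(x).argmax(dim=-1)   # predicted gesture labels for a new subject

A usage note: at test time every classifier sees the same input from the new subject, so the ensemble benefits from both the generalized shared space and the pairwise-aligned specific spaces.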
