Abstract

Touch gestures are among the most essential and effective means of conveying affective feelings and intent in human communication. For an intelligent agent or a robot, the ability to automatically detect and recognize human touch enables efficient and natural human–robot interaction. To this end, a novel spatiotemporal fusion feature extraction method is proposed for touch gesture classification. The proposed method extracts time-frequency features from the wavelet coefficients obtained by discrete wavelet transforms. A feature array organized over spatial positions and frequency bands is then constructed to obtain the spatiotemporal fusion features. The publicly available touch gesture dataset CoST is used for evaluation. Recognition of 14 gesture classes with a user-independent model achieves an accuracy of up to 64.17%. Experimental results show that the proposed method outperforms state-of-the-art approaches and that the spatiotemporal fusion features effectively improve touch gesture recognition performance.
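
To make the feature construction concrete, the sketch below illustrates one plausible reading of the approach: a discrete wavelet transform is applied to the pressure signal of each sensor cell, and simple band-wise statistics of the coefficients are arranged by spatial position and frequency band. This is not the authors' implementation; the wavelet family (`db4`), decomposition level, band statistics, and the assumed 8x8 sensor grid and sampling rate are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): per-taxel DWT features for one
# touch gesture sample, assuming the sample is a (T, 8, 8) array of
# pressure frames from an 8x8 sensor grid.
import numpy as np
import pywt


def dwt_features(sample, wavelet="db4", level=3):
    """Build a feature array over spatial positions and frequency bands.

    sample : ndarray of shape (T, H, W), pressure frames over time.
    Returns a flattened (H, W, level + 1, 2) array holding the energy and
    standard deviation of the wavelet coefficients in each frequency band.
    """
    T, H, W = sample.shape
    feats = np.zeros((H, W, level + 1, 2))
    for i in range(H):
        for j in range(W):
            signal = sample[:, i, j]
            # wavedec returns [cA_level, cD_level, ..., cD_1]
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            for b, c in enumerate(coeffs):
                feats[i, j, b, 0] = np.sum(c ** 2)  # band energy
                feats[i, j, b, 1] = np.std(c)       # band spread
    return feats.reshape(-1)


# Example: one synthetic 2-second gesture at ~135 Hz on an 8x8 grid.
sample = np.random.rand(270, 8, 8)
print(dwt_features(sample).shape)  # (8 * 8 * 4 * 2,) = (512,)
```

The resulting feature vector could then be fed to any standard classifier for the 14-class recognition task; the specific classifier and fusion strategy used in the paper are not reproduced here.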
