Abstract

In this study, the authors propose a novel and robust approach to controlling auxiliary tasks in vehicles using hand gestures. First, they create a three-dimensional video volume by appending frames one after another, which captures the motion history of the sequence. Then, they extract features using the histogram of oriented gradients (HOG) on each video volume. These features are represented as subspaces on the Grassmann manifold. To improve recognition accuracy, the data are mapped from one manifold to another with the help of a Grassmann kernel. A Grassmann graph-embedding discriminant analysis framework is used to classify the gestures. Experiments are performed on two datasets, LISA and Cambridge Hand Gesture, under three testing protocols: 1/3-subject, 2/3-subject and cross-subject. Experimental results show that the proposed model outperforms or is comparable with state-of-the-art methods.
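The pipeline summarised above (stack per-frame HOG features for a video volume, represent the volume as a subspace on the Grassmann manifold, and compare subspaces through a Grassmann kernel) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the frame size, HOG parameters, subspace dimension and the choice of the projection kernel are illustrative assumptions, it uses scikit-image's `hog` and an SVD-based orthonormal basis, and it omits the Grassmann graph-embedding discriminant analysis classifier.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact settings):
# per-frame HOG features are stacked into a matrix, an orthonormal basis of
# its column space gives a point on the Grassmann manifold, and the
# projection kernel compares two such subspaces.
import numpy as np
from skimage.feature import hog

def video_volume_subspace(frames, subspace_dim=10):
    """Map a video volume (list of grayscale frames) to a Grassmann point.

    Returns a (d x subspace_dim) matrix with orthonormal columns spanning
    the dominant subspace of the stacked HOG features.
    """
    feats = np.stack([
        hog(f, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), feature_vector=True)
        for f in frames
    ], axis=1)                      # shape: (feature_dim, num_frames)
    # Orthonormal basis via thin SVD; keep the leading directions.
    u, _, _ = np.linalg.svd(feats, full_matrices=False)
    return u[:, :subspace_dim]

def projection_kernel(Y1, Y2):
    """Grassmann projection kernel k(Y1, Y2) = ||Y1^T Y2||_F^2."""
    return np.linalg.norm(Y1.T @ Y2, ord='fro') ** 2

# Usage with random stand-in frames (64x64 grayscale, 12 frames per volume):
rng = np.random.default_rng(0)
vol_a = [rng.random((64, 64)) for _ in range(12)]
vol_b = [rng.random((64, 64)) for _ in range(12)]
Ya, Yb = video_volume_subspace(vol_a), video_volume_subspace(vol_b)
print(projection_kernel(Ya, Yb))
```

In this sketch the kernel values between gesture volumes would feed a kernelised discriminant-analysis or graph-embedding classifier, which is the role the Grassmann graph-embedding discriminant analysis framework plays in the paper.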
