Abstract

Continuous, real-time gesture spotting is a key factor in the development of novel human–machine interaction modalities. Gesture recognition can be greatly improved by reliable prior segmentation. This paper introduces a new unsupervised threshold-based hand/arm gesture segmentation method that accurately divides unsegmented, unbounded input data streams into dynamic and static segments. This segmentation may reduce the number of wrongly classified gestures in real-world conditions. The proposed approach identifies sudden inversions of movement direction, which are a cause of oversegmentation (excessive segmentation). This is achieved by analyzing velocities and accelerations numerically derived from positional data. A genetic algorithm is used to compute feasible thresholds from calibration data. Experimental tests with three different subjects demonstrated an average oversegmentation error of 2.70% on a motion segmentation benchmark with a feasible sliding window size.
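The core idea described above can be illustrated with a minimal sketch: velocities are numerically derived from positional data, smoothed over a sliding window, and compared against a threshold to label each sample as static or dynamic. The threshold value, window size, and function names below are illustrative assumptions, not the paper's implementation (in the paper, feasible thresholds are tuned by a genetic algorithm on calibration data).

```python
import numpy as np

def segment_motion(positions, dt, vel_threshold, window=5):
    """Label each sample of a positional stream as static (0) or dynamic (1).

    Hypothetical sketch of threshold-based motion segmentation:
    velocity magnitudes are numerically derived from positions,
    smoothed over a sliding window, and thresholded.
    """
    positions = np.asarray(positions, dtype=float)
    # Numerical derivative of position -> per-sample velocity magnitude
    vel = np.linalg.norm(np.gradient(positions, dt, axis=0), axis=1)
    # Sliding-window average smooths sensor noise before thresholding
    kernel = np.ones(window) / window
    vel_smooth = np.convolve(vel, kernel, mode="same")
    return (vel_smooth > vel_threshold).astype(int)

# Synthetic 3-D stream: hand at rest, a 1 s linear motion, then at rest again
t = np.arange(0.0, 3.0, 0.01)
x = np.where((t > 1) & (t < 2), t - 1, np.where(t >= 2, 1.0, 0.0))
pos = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)
labels = segment_motion(pos, dt=0.01, vel_threshold=0.5)
```

With these placeholder parameters, the middle second of the synthetic stream is labeled dynamic and the rest static; a real deployment would calibrate the threshold per subject, as the paper does.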
