Abstract

This paper describes the simultaneous use of inertial and video sensing for human action detection and recognition in continuous action streams, i.e., streams in which actions of interest are performed at random among actions of non-interest. The inertial and video data are captured simultaneously by a wearable inertial sensor and a video camera and are converted into 2D and 3D images, respectively. These images are then fed into a 2D and a 3D convolutional neural network, and the decisions of the two networks are fused in order to detect and recognize a specified set of actions of interest within the continuous streams. The fusion approach is applied to two sets of actions of interest: smart TV gestures and sports actions. The results indicate that fusion is more effective than either sensing modality used individually: the average accuracy of the fusion approach is 5.8% above inertial sensing and 14.3% above video sensing for the smart TV gestures, and 23.2% above inertial sensing and 1.9% above video sensing for the sports actions.
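
As a minimal sketch of the kind of decision-level fusion the abstract describes, assuming PyTorch: a 2D CNN classifies inertial "signal images," a 3D CNN classifies stacked video frames, and their class probabilities are averaged. All layer choices, the class count, the input shapes, and the averaging rule are illustrative assumptions, not the authors' actual networks or fusion rule.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 6  # hypothetical: actions of interest plus a "non-interest" class

class Inertial2DCNN(nn.Module):
    """2D CNN over images formed from inertial signals (hypothetical layout)."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (N, 1, H, W)
        return self.classifier(self.features(x).flatten(1))

class Video3DCNN(nn.Module):
    """3D CNN over a clip of stacked video frames (hypothetical layout)."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (N, 3, T, H, W)
        return self.classifier(self.features(x).flatten(1))

def fuse_decisions(logits_2d, logits_3d):
    """Average the per-class probabilities of the two networks and take the
    argmax; one simple decision-level fusion rule (assumed, not the paper's)."""
    probs = 0.5 * (torch.softmax(logits_2d, dim=1) +
                   torch.softmax(logits_3d, dim=1))
    return probs.argmax(dim=1)

# Usage on dummy inputs:
inertial_net, video_net = Inertial2DCNN(), Video3DCNN()
x2d = torch.randn(4, 1, 32, 32)      # 4 inertial signal images
x3d = torch.randn(4, 3, 16, 32, 32)  # 4 clips of 16 video frames each
pred = fuse_decisions(inertial_net(x2d), video_net(x3d))
print(pred)                          # tensor of 4 predicted class ids
```

Averaging softmax outputs is only one of several decision-level fusion rules (others include weighted averaging or majority voting); the sketch uses it because it is the simplest way to combine the two classifiers' outputs per sample.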
