Abstract
This paper describes the simultaneous use of inertial and video sensing to detect and recognize human actions in continuous action streams, i.e., streams in which actions of interest occur at random among actions of non-interest. The inertial and video data are captured simultaneously by a wearable inertial sensor and a video camera and are converted into 2D and 3D images, respectively. These images are then fed into a 2D and a 3D convolutional neural network, and the decisions of the two networks are fused to detect and recognize a specified set of actions of interest in the continuous stream. The fusion approach is applied to two sets of actions of interest: smart TV gestures and sports actions. The results indicate that fusion is more effective than either sensing modality used individually: the average accuracy of the fusion approach exceeds the inertial-only and video-only accuracies by 5.8% and 14.3%, respectively, for the smart TV gestures, and by 23.2% and 1.9%, respectively, for the sports actions.
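For illustration, the following minimal PyTorch sketch shows one way the described pipeline could be wired: a 2D CNN over an image formed from inertial signals, a 3D CNN over a video volume, and decision-level fusion of their class-probability outputs. The toy architectures, input sizes, class count, and the score-averaging fusion rule are all assumptions for the sketch, not the paper's actual networks or fusion rule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN2D(nn.Module):
    """Toy 2D CNN standing in for the network that consumes inertial-signal images."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


class SmallCNN3D(nn.Module):
    """Toy 3D CNN standing in for the network that consumes video volumes."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


def fuse_decisions(logits_2d, logits_3d):
    """Decision-level fusion: average the two networks' softmax probability
    vectors and pick the most probable class (one common fusion rule;
    the paper's specific rule may differ)."""
    probs = 0.5 * (F.softmax(logits_2d, dim=1) + F.softmax(logits_3d, dim=1))
    return probs.argmax(dim=1), probs


if __name__ == "__main__":
    num_classes = 6  # placeholder; the paper's action sets define the real value
    net2d, net3d = SmallCNN2D(num_classes), SmallCNN3D(num_classes)
    inertial_img = torch.randn(1, 1, 64, 64)    # 2D image built from inertial signals
    video_clip = torch.randn(1, 3, 16, 64, 64)  # 3D volume: channels x frames x H x W
    pred, probs = fuse_decisions(net2d(inertial_img), net3d(video_clip))
    print(pred.item(), probs.squeeze().tolist())
```

In a continuous-stream setting, such a fused classifier would presumably be applied over a sliding window, with a rejection threshold on the fused probabilities used to discard segments containing actions of non-interest; those details are beyond what the abstract specifies.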