Abstract

This paper presents the simultaneous use of video images and inertial signals, captured at the same time by a video camera and a wearable inertial sensor, within a fusion framework to achieve more robust human action recognition than when each sensing modality is used individually. The data captured by these sensors are converted into 3D video images and 2D inertial images, which are then fed into a 3D convolutional neural network and a 2D convolutional neural network, respectively, for action recognition. Two types of fusion are considered: decision-level fusion and feature-level fusion. Experiments are conducted on the publicly available UTD-MHAD dataset, in which simultaneous video images and inertial signals are captured for a total of 27 actions. The results indicate that both the decision-level and feature-level fusion approaches yield higher recognition accuracies than either sensing modality used individually, with the decision-level fusion approach achieving the highest accuracy of 95.6%.
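As a conceptual sketch of the two fusion strategies described above (this is not the paper's implementation; the function names, feature dimensions, and fusion weight are illustrative assumptions), decision-level fusion combines the per-class scores produced by the two modality networks, while feature-level fusion concatenates their feature vectors before a shared classifier:

```python
import numpy as np

def decision_level_fusion(video_scores, inertial_scores, w=0.5):
    """Fuse per-class probability scores from the two modality
    networks by weighted averaging, then pick the argmax class.
    The weight w is an illustrative hyperparameter."""
    fused = w * video_scores + (1.0 - w) * inertial_scores
    return int(np.argmax(fused))

def feature_level_fusion(video_features, inertial_features):
    """Concatenate the feature vectors of the two networks into a
    single vector that a shared classifier head would consume."""
    return np.concatenate([video_features, inertial_features])

# Toy example with 27 action classes, as in UTD-MHAD.
rng = np.random.default_rng(0)
video_scores = rng.dirichlet(np.ones(27))      # stand-in for 3D-CNN softmax output
inertial_scores = rng.dirichlet(np.ones(27))   # stand-in for 2D-CNN softmax output
action = decision_level_fusion(video_scores, inertial_scores)

# Hypothetical 128-dim video and 64-dim inertial feature vectors.
fused_vec = feature_level_fusion(np.zeros(128), np.zeros(64))
print(action, fused_vec.shape)
```

In this sketch, decision-level fusion keeps the two networks entirely separate and only merges their final outputs, whereas feature-level fusion requires training a joint classifier on the concatenated representation.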

Highlights

  • Human action recognition has been extensively studied in the literature and has already been incorporated into commercial products

  • The use of deep learning models or deep neural networks has proven to be more effective than conventional approaches for human action recognition

  • In this paper, the simultaneous utilization of video and inertial sensing modalities was considered within a fusion framework to achieve human action recognition based on deep learning models


Introduction

Human action recognition has been extensively studied in the literature and has already been incorporated into commercial products. The use of deep learning models or deep neural networks has proven to be more effective than conventional approaches for human action recognition. In [15], it was shown that deep learning networks utilizing video images performed better than the previous conventional approaches. Depth cameras have also been utilized for human action recognition, e.g., [2,11]. However, the use of these cameras has been limited to indoor environments, as they rely on infrared light to obtain depth images.


