Abstract

This article presents a deep learning-based decision fusion approach for action and gesture recognition that uses a depth camera and a wearable inertial sensor simultaneously. A convolutional neural network (CNN) processes the depth images captured by the depth camera, and a combined CNN and long short-term memory (LSTM) network processes the inertial signals captured by the wearable inertial sensor; the outputs of the two classifiers are then fused at the decision level. Because the training data are limited in size, a data augmentation procedure is carried out by generating depth images corresponding to different orientations of the depth camera and inertial signals corresponding to different orientations of the inertial sensor placement on the body. The results obtained indicate that both the decision-level fusion and the data augmentation improve the recognition accuracies.
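To make the two-stream architecture concrete, here is a minimal PyTorch sketch of the overall pipeline: a 2-D CNN classifying depth images, a 1-D CNN followed by an LSTM classifying multi-channel inertial signals, and a decision-level fusion that averages the two modalities' class probabilities. This is an illustration only, not the authors' exact architecture; all layer sizes, input shapes (64x64 depth images, 6-channel inertial sequences), and the equal-weight averaging rule are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthCNN(nn.Module):
    """CNN classifier for depth images (hypothetical layer sizes)."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumes 64x64 single-channel depth input -> 32 maps of 16x16 after pooling.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


class InertialCNNLSTM(nn.Module):
    """1-D CNN followed by an LSTM for inertial signals (hypothetical sizes)."""

    def __init__(self, num_classes: int, num_channels: int = 6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(num_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        # x: (batch, channels, time), e.g. 3-axis accelerometer + 3-axis gyroscope.
        x = self.conv(x)          # (batch, 32, time/2)
        x = x.transpose(1, 2)     # (batch, time/2, 32) for the LSTM
        _, (h, _) = self.lstm(x)  # keep the final hidden state
        return self.classifier(h[-1])


def fuse_decisions(depth_logits, inertial_logits):
    """Decision-level fusion: average per-modality class probabilities."""
    probs = (F.softmax(depth_logits, dim=1) + F.softmax(inertial_logits, dim=1)) / 2
    return probs.argmax(dim=1)


# Usage sketch with random stand-in data for a 10-class problem.
depth = torch.randn(4, 1, 64, 64)     # batch of depth images
inertial = torch.randn(4, 6, 100)     # batch of 100-sample inertial windows
preds = fuse_decisions(DepthCNN(10)(depth), InertialCNNLSTM(10)(inertial))
```

Fusing at the decision level keeps the two networks independent, so either modality can still produce a prediction on its own; weighted averaging or a product rule could replace the simple mean used here.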
