Inertial sensors have become increasingly popular in human activity classification due to their ease of use and affordability. This paper proposes a novel human activity recognition algorithm that combines a high-gain observer with deep learning computer vision classification algorithms. The nonlinear high-gain observer, designed using Lyapunov analysis, accurately estimates the attitude of a human subject's chest from measurements of a single Inertial Measurement Unit (IMU). The signals processed by the observer are then converted into spectrograms to obtain "images" of the signals' frequency content. The images for activities from a dataset of 7 human subjects are annotated and used for training/fine-tuning of several well-known deep learning algorithms for image processing. The best combination of our algorithms achieves an accuracy of 98% for activity recognition. Using deep learning computer vision algorithms, this paper shows how to perform transfer learning from networks pre-trained on millions of images, demonstrating that a powerful deep learning network for activity recognition can be trained even with small datasets. The algorithm that uses the high-gain observer is shown to perform significantly better than an algorithm based on raw accelerometer and gyroscope signals.
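The pipeline described above (observer-estimated attitude signals → spectrogram "images" → transfer learning with a pretrained image classifier) can be illustrated with a short sketch. This is not the authors' implementation: the sampling rate `fs`, window length `nperseg`, the choice of ResNet-18 as the pretrained backbone, and the class count `n_classes` are all illustrative placeholders, assuming SciPy and PyTorch/torchvision are available.

```python
# Minimal sketch (not the paper's code): turn an observer output signal into a
# spectrogram image and fine-tune an ImageNet-pretrained CNN on such images.
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn
from torchvision import models


def signal_to_spectrogram_image(x, fs=100.0, nperseg=64):
    """Convert a 1-D attitude signal into a normalized log-magnitude spectrogram."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    img = 10.0 * np.log10(Sxx + 1e-10)                          # dB scale
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # scale to [0, 1]
    return img.astype(np.float32)


def build_classifier(n_classes):
    """Load a pretrained ResNet-18 and replace its head (transfer learning)."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in net.parameters():            # freeze the pretrained backbone
        p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, n_classes)  # new trainable output layer
    return net


if __name__ == "__main__":
    # Synthetic stand-in for one observer output channel (e.g., a chest attitude angle).
    x = np.sin(2 * np.pi * 1.5 * np.arange(0, 10, 0.01))
    img = signal_to_spectrogram_image(x)

    # Replicate the single-channel spectrogram to 3 channels for the RGB-pretrained net
    # and resize to the standard ImageNet input resolution.
    batch = torch.from_numpy(np.stack([img] * 3))[None]          # shape (1, 3, F, T)
    batch = nn.functional.interpolate(batch, size=(224, 224))

    net = build_classifier(n_classes=4)  # placeholder; activity count not taken from the paper
    logits = net(batch)
    print(logits.shape)                  # -> torch.Size([1, 4])
```

In this sketch only the replaced final layer is trained, which is one common way to exploit networks pre-trained on millions of images when the activity dataset itself is small.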