Abstract

This work presents an on-device machine learning model for human activity recognition (HAR): the identification of mobility gestures such as running, walking, squatting, and jumping. Data are collected at a sampling rate of 119 Hz with an Arduino Nano 33 BLE Sense board, which includes an onboard Inertial Measurement Unit (IMU). The same board serves as the microcontroller for gesture identification, forming an end-to-end edge computing application. A deep neural network model is trained and then compressed for deployment on the board, creating a self-contained embedded device capable of identifying the gesture performed. Three deep learning models are evaluated for identifying the mobility gestures: a Multi-Layer Perceptron (MLP), a Convolutional Neural Network – Long Short-Term Memory network (CNN-LSTM), and a CNN – Gated Recurrent Unit network (CNN-GRU). The observed accuracies across the gesture categories are 96%, 97.1%, and 97.8% for the MLP, CNN-LSTM, and CNN-GRU, respectively. The study demonstrates that embedded devices running deep neural network models can offer low cost and minimal power usage while meeting data privacy requirements in HAR.

