Deep Learning (DL) models, widely used in several domains, are often applied for posture recognition. This work investigates five DL architectures for posture recognition: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Transformer, hybrid CNN-LSTM, and hybrid CNN-Transformer. Working postures in agriculture and construction were addressed as use cases, with an inertial dataset acquired while typical tasks of these domains were simulated in circuits. Since model performance greatly depends on the choice of hyperparameters, a grid search was conducted to find the optimal values. An extensive analysis of the effects of the hyperparameter combinations is presented, identifying some general tendencies. Moreover, to unveil the black-box DL models, we applied the Gradient-weighted Class Activation Mapping (Grad-CAM) explainability method to the CNN's outputs to better understand the model's decision-making, in terms of the most important sensors and time steps for each window's output. Innovative hybrid architectures combining a CNN with an LSTM or a Transformer encoder were implemented, using the convolutional feature maps as the LSTM's or Transformer's inputs and fusing both subnetworks' outputs with weights learned during training. All architectures successfully recognized the eight posture classes, with the best model of each architecture exceeding a 91.5% F1-score on the test set. A top F1-score of 94.33%, with an inference time of just 0.29 ms (on a regular laptop), was achieved by a hybrid CNN-Transformer.
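
The sketch below illustrates, in PyTorch, the kind of hybrid CNN-Transformer described above: convolutional feature maps feed a Transformer encoder, and the two subnetworks' class outputs are fused with weights learned during training. The layer sizes, the softmax-normalized scalar fusion weights, and all identifiers are illustrative assumptions, not the authors' exact model or hyperparameters.

```python
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): a 1-D CNN whose
    feature maps are passed to a Transformer encoder, with each subnetwork's
    class logits fused by weights learned jointly with the rest of the model."""

    def __init__(self, n_channels=6, n_classes=8, d_model=64):
        super().__init__()
        # Convolutional feature extractor over the inertial window
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Transformer encoder consumes the CNN feature maps as a sequence
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Separate classification heads for the two subnetworks
        self.cnn_head = nn.Linear(d_model, n_classes)
        self.trf_head = nn.Linear(d_model, n_classes)
        # Fusion weights learned during training (assumed scalar per branch)
        self.fusion = nn.Parameter(torch.tensor([0.5, 0.5]))

    def forward(self, x):                         # x: (batch, window_len, n_channels)
        feats = self.cnn(x.transpose(1, 2))       # (batch, d_model, window_len)
        cnn_out = self.cnn_head(feats.mean(dim=2))            # CNN branch logits
        trf_feats = self.transformer(feats.transpose(1, 2))   # feature maps as tokens
        trf_out = self.trf_head(trf_feats.mean(dim=1))         # Transformer branch logits
        w = torch.softmax(self.fusion, dim=0)     # normalized learned fusion weights
        return w[0] * cnn_out + w[1] * trf_out    # fused class logits

if __name__ == "__main__":
    model = HybridCNNTransformer()
    logits = model(torch.randn(4, 128, 6))        # 4 windows, 128 samples, 6 channels
    print(logits.shape)                           # torch.Size([4, 8])
```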