Abstract

We demonstrate accurate spatio-temporal gait data classification from raw tomography sensor data without the need to reconstruct images. Our approach rests on a simple yet efficient machine learning methodology: a convolutional neural network architecture that learns spatio-temporal features automatically, end to end, from the raw sensor data. In a case study on a floor pressure tomography sensor, experimental results show an effective gait pattern classification F-score of 97.88 $\pm$ 1.70%. Automatic extraction of classification features from raw data leads to substantially better performance than features derived by shallow machine learning models that use the reconstructed images as input, implying that, for the purpose of automatic decision-making, the image reconstruction step can be eliminated. The approach is portable across a range of industrial tasks that involve tomography sensors. The proposed learning architecture is computationally efficient, has a low number of parameters, and achieves reliable classification F-score performance from a limited set of experimental samples. We also introduce a floor sensor dataset of 892 samples, encompassing experiments with 10 manners of walking and 3 cognitive-oriented tasks, for a total of 13 types of gait patterns.
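As a rough illustration of the kind of computation such a network performs (the sizes, kernel, and pooling choices below are ours for illustration, not the paper's architecture), the core CNN operations on a spatio-temporal raw sensor matrix, namely convolution, a ReLU nonlinearity, and max pooling, can be sketched in plain NumPy:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel map x with kernel k."""
    kh, kw = k.shape
    h = x.shape[0] - kh + 1
    w = x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

# Illustrative raw sensor matrix: 16 time steps x 12 measurement channels.
rng = np.random.default_rng(0)
rsm = rng.standard_normal((16, 12))

# A trained 3x3 filter would respond to local spatio-temporal patterns;
# a random kernel stands in for a learned one here.
kernel = rng.standard_normal((3, 3))
feature_map = np.maximum(conv2d(rsm, kernel), 0.0)          # ReLU
pooled = feature_map.reshape(7, 2, 5, 2).max(axis=(1, 3))   # 2x2 max pooling
print(pooled.shape)  # (7, 5)
```

In a full CNN, several such filters are learned from data and the pooled feature maps are fed to a classifier layer; the point of the sketch is only that the filters slide jointly over the time and channel axes, which is what lets the network pick up spatio-temporal gait patterns directly from raw measurements.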

Highlights

  • With the industrial world moving further into numerous variants of smart sensing, the already established area of industrial imaging needs to reassess long-standing paradigms in the light of the new opportunities and challenges. Indirect imaging, such as tomography, has hitherto played the role of an important utility because of problems with direct access to industrial subjects: spatial limitations introduced by physical restriction, as well as temporal limitations caused by requirements for speed and volume of data [1].

  • Tomography sensors deliver measurement data that are a spatio-temporal sample of the imaged object.

  • We introduce a machine learning model based on a convolutional neural network (CNN), a form of deep learning [7], for pattern classification, together with a raw sensor data transformation technique that allows the automatic extraction of features from the raw spatio-temporal tomography sensor data.
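The transformation step can be pictured as stacking the per-frame raw boundary measurements into a fixed-size 2-D matrix (time steps by measurement channels) that a CNN can consume. The helper below is a hypothetical sketch under that assumption, not the paper's exact transformation:

```python
import numpy as np

def to_rsm(frames, n_steps):
    """Stack per-frame raw measurement vectors into a fixed-size
    raw sensor matrix (time x channels), padding or truncating the
    time axis to n_steps. Illustrative helper, not the paper's method."""
    frames = np.asarray(frames, dtype=float)
    t, c = frames.shape
    if t >= n_steps:
        return frames[:n_steps]          # truncate long recordings
    out = np.zeros((n_steps, c))
    out[:t] = frames                     # zero-pad short recordings
    return out

# e.g. 10 frames of 12 boundary measurements, padded to 16 time steps
rsm = to_rsm(np.ones((10, 12)), 16)
print(rsm.shape)  # (16, 12)
```

Fixing the time dimension this way keeps every sample the same shape, so a single CNN input layer can handle recordings of varying duration.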


Summary

INTRODUCTION

With the industrial world moving further into numerous variants of smart sensing, the already established area of industrial imaging needs to reassess long-standing paradigms in the light of the new opportunities and challenges. Indirect imaging, such as tomography, has hitherto played the role of an important utility because of problems with direct access to industrial subjects: spatial limitations introduced by physical restriction, as well as temporal limitations caused by requirements for speed and volume of data [1]. In the example case of industrial rheology, time sequences of reconstructed cross sections of flow have been used to present three-dimensional (3-D) models [4], [5]. Beyond this substantial achievement toward flow visualization, the obtained cross-sectional images and 3-D presentations still need interpretation, in terms of various flow regimes and their transitions, required for controlling an industrial process.

Sensing for Gait Analysis
Floor Sensors for Gait Analysis
CNNs to Learn Spatio-Temporal Features
VISUALIZATION OF THE RAW SPATIO-TEMPORAL SENSOR DATA
UOM-GAIT-13
METHODS
Reading Task
Tomography Spatial Reconstruction of Foot Pressure
EXPERIMENTS
CNN Model Architecture for Spatio-Temporal RSMs
Model Training and Evaluation
Cross Validation and Feature Extraction for CNNs
Classification Performance of Spatio-Temporal RSMs
Top Performing CNN Model
Filter Maps Visualization
Model Training Time Execution Comparison
DISCUSSION AND CONCLUSION
