Abstract

For the effective application of human-assistive technologies in healthcare services and human–robot collaborative tasks, computing devices must be aware of human movements, so developing a reliable real-time activity-recognition method for the continuous, smooth operation of such smart devices is imperative. Lightweight, intelligent methods that use ubiquitous sensors are pivotal to achieving this. In this study, with the correlation of time-series data in mind, a new method of structuring data for deeper feature extraction is introduced. The activity data were collected using a smartphone with the help of a purpose-built iOS application. Data from eight activities were shaped into single- and double-channel forms to extract deep temporal and spatial features of the signals. In addition to the time domain, the raw data were represented in the Fourier and wavelet domains. Among the several neural-network models fitted for deep-learning classification of the activities, a convolutional neural network with a double-channel time-domain input performed well. This method was further evaluated on other public datasets, where better performance was obtained. The practicability of the trained model was finally tested in real time on a computer and a smartphone, where it demonstrated promising results.
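The abstract mentions representing each raw sensor window in the Fourier and wavelet domains in addition to the time domain. A minimal numpy sketch of both transforms is below; the paper does not specify the wavelet family or transform parameters here, so the single-level Haar transform is an illustrative assumption, not the authors' exact pipeline.

```python
import numpy as np

def to_fourier(window):
    """Magnitude spectrum of a 1-D sensor window (rFFT keeps the real-signal half)."""
    return np.abs(np.fft.rfft(window))

def haar_dwt(window):
    """One level of the Haar discrete wavelet transform: returns
    (approximation, detail) coefficients, each half the input length."""
    x = np.asarray(window, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)  # low-pass (smooth) component
    detail = (even - odd) / np.sqrt(2.0)  # high-pass (difference) component
    return approx, detail

# Example: a 128-sample accelerometer-like window
window = np.sin(np.linspace(0, 8 * np.pi, 128))
spectrum = to_fourier(window)        # 65 frequency bins for a 128-sample window
approx, detail = haar_dwt(window)    # 64 + 64 wavelet coefficients
```

Either representation can then be structured into network inputs the same way as the time-domain windows.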

Highlights

  • HAR is an active research field that focuses on identifying human activities from a visual or sensor input

  • Data for HAR training are mainly acquired from non-visual sensors such as IMUs and sEMG [2,4,5,6,7,8,9,10,11,12,13], visual sensors such as cameras [14,15,16], and a combination of both [17,18]

  • It is very similar to LSTM units, except that the internal element-wise matrix operations are replaced by convolution operations
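The last highlight describes the ConvLSTM: an LSTM cell whose internal matrix products are replaced by convolutions, while the gate-wise cell update stays element-wise. A minimal numpy sketch of one time step on a 1-D feature map is below; the gate names, kernel sizes, and 1-D (rather than 2-D) convolutions are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, W, U, b):
    """One ConvLSTM time step on a 1-D feature map.

    W, U: dicts of input/recurrent kernels for gates 'i', 'f', 'o', 'g';
    b: dict of per-position biases. Where a plain LSTM uses matrix
    products, each gate pre-activation here is a convolution.
    """
    pre = {}
    for name in ("i", "f", "o", "g"):
        pre[name] = (np.convolve(x, W[name], mode="same")
                     + np.convolve(h, U[name], mode="same") + b[name])
    i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
    g = np.tanh(pre["g"])
    c_new = f * c + i * g            # cell update is element-wise, as in a standard LSTM
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy usage: feature map of length 32, kernel length 3
rng = np.random.default_rng(0)
T, K = 32, 3
W = {name: rng.normal(size=K) for name in "ifog"}
U = {name: rng.normal(size=K) for name in "ifog"}
b = {name: np.zeros(T) for name in "ifog"}
h = c = np.zeros(T)
h, c = convlstm_step(rng.normal(size=T), h, c, W, U, b)
```

Because the convolution preserves spatial layout, the hidden and cell states keep the same shape as the input feature map at every step.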

Summary

Introduction

HAR (human activity recognition) is an active research field that focuses on identifying human activities from visual or sensor input. Considerable progress has been made on HAR by conventional pattern-recognition methods through classical machine-learning algorithms such as hidden Markov models [24,25,26,27], decision trees [28,29,30], SVMs (support vector machines) [5,29,31,32,33], and naive Bayes [34,35,36]. Although these methods achieve excellent results when the input data are few and low-dimensional, they require certain domain knowledge or a controlled environment. Before motion data are provided to deep-learning algorithms, an input adaptation must be performed, which influences the training performance of the networks. These time-series motion data were transformed and structured into various forms to obtain better classification results. The main contribution of this study is a better input-adaptation method for sensor-based human activity recognition, carried out by restructuring the raw sensor data in a special manner to improve the performance of neural networks.
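The input adaptation described above amounts to segmenting the raw sensor stream into fixed-width windows and stacking them into single- or double-channel tensors for the network. A minimal numpy sketch is below; the window width of 128 samples, the stride of 64, and the split of a hypothetical 6-axis stream into accelerometer and gyroscope channels are illustrative assumptions, since the paper's exact parameters are not given in this summary.

```python
import numpy as np

def sliding_windows(signal, width, stride):
    """Segment a (T, channels) sensor stream into overlapping windows."""
    starts = range(0, len(signal) - width + 1, stride)
    return np.stack([signal[s:s + width] for s in starts])

# Hypothetical 6-axis stream: 3 accelerometer + 3 gyroscope channels, 1000 samples
stream = np.random.randn(1000, 6)
windows = sliding_windows(stream, width=128, stride=64)   # (14, 128, 6)

# Single-channel structuring: each window becomes one (128, 6, 1) "image"
single = windows[..., np.newaxis]

# Double-channel structuring: accelerometer and gyroscope stacked as two channels
double = np.stack([windows[:, :, :3], windows[:, :, 3:]], axis=-1)  # (14, 128, 3, 2)
```

A 2-D convolutional network can then treat each structured window like a small image, extracting temporal features along the window axis and spatial features across the sensor axes.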

Background and Literature Review
Conventional Pattern Recognition
Deep Learning
Data Collection
Data Segmentation
Data Representation
Data Structuring
Neural Network Architecture
Convolutional Neural Network
Long Short-Term Memory
ConvLSTM
Experimental Results and Discussion
Network Performances
Public Datasets
WISDM Dataset
UCI Dataset
Physical Activity Recognition Dataset
UniMiB SHAR Dataset
The Application Software
Real-Time Recognition
Conclusions and Future Works
