Abstract

Achieving better performance has long been an important research goal in smartphone-based human activity recognition (HAR). Traditional activity recognition methods rely mainly on handcrafted feature extraction, but manually selected features are not always effective, which limits improvements in recognition accuracy. This paper introduces a deep convolutional neural network (CNN) model for human activity recognition that effectively improves recognition accuracy. First, we collect 128-dimensional time-domain sequence features from the accelerometer and gyroscope sensor data of a smartphone. We then apply a time-domain-to-spatial-domain transformation, the Gramian Angular Field (GAF) transform, to convert these time-domain signals into 128×128 image-like spatial signals, which lets us take full advantage of deep learning models that have proven highly effective in computer vision. Building on the powerful feature-representation capability of deep CNNs, we construct an 8-layer convolutional neural network for human activity recognition. Experimental results on the UCI HAR dataset confirm the effectiveness of our method: the recognition accuracy is satisfactory and competitive with both traditional and state-of-the-art methods.
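To make the time-to-image step concrete, the following is a minimal sketch of the Gramian Angular Field transform under the standard GASF (summation) formulation: the window is rescaled to [-1, 1], encoded as angles via arccos, and the image entry (i, j) is cos(φ_i + φ_j). The abstract does not specify the summation or difference variant, nor the exact preprocessing, so treat this as an illustrative assumption rather than the paper's exact implementation.

```python
import numpy as np

def gramian_angular_field(x):
    """GASF sketch: map a 1-D time-domain window to a square image.

    x: 1-D window (e.g. a 128-sample accelerometer axis), returns a
    len(x) x len(x) matrix with entries cos(phi_i + phi_j).
    """
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    # Rescale the series into [-1, 1] so arccos is defined.
    if x_max > x_min:
        x_scaled = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    else:
        x_scaled = np.zeros_like(x)
    # Angular encoding in polar coordinates: phi_i = arccos(x_i).
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # GASF entry (i, j) = cos(phi_i + phi_j), computed by broadcasting.
    return np.cos(phi[:, None] + phi[None, :])

# Example: a 128-sample window becomes a 128x128 image, matching the
# window length used in the paper.
window = np.sin(np.linspace(0, 4 * np.pi, 128))
image = gramian_angular_field(window)
print(image.shape)  # (128, 128)
```

In this form, each sensor window yields one 128×128 channel that can be stacked across sensor axes and fed to a standard 2-D CNN.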
