Abstract

In the indoor environment, pedestrian activities carry semantic information that can be used as landmarks for indoor localization. In this paper, we propose a pedestrian activity recognition method based on a convolutional neural network. A new convolutional neural network is designed to learn the appropriate features automatically. Experiments show that the proposed method achieves approximately 98% accuracy within about 2 s in identifying nine types of activities: still, walk, upstairs, up elevator, up escalator, down elevator, down escalator, downstairs and turning. Moreover, we have built a pedestrian activity database containing more than 6 GB of accelerometer, magnetometer, gyroscope and barometer data collected with various types of smartphones, which we will make publicly available to support academic research.
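
The paper's exact network architecture is not reproduced on this page, but a minimal sketch can illustrate the general approach: a small 1D CNN that classifies fixed-length windows of fused sensor data into the nine activity classes. Everything below is an assumption made for illustration (PyTorch, a 50 Hz sampling rate, a 2 s window of 100 samples, and a 10-channel layout of 3-axis accelerometer, gyroscope and magnetometer plus a barometer); it is not the authors' published model.

```python
# Hypothetical sketch of a 1D CNN activity classifier; the paper's
# actual architecture, sampling rate and window size may differ.
import torch
import torch.nn as nn

NUM_CHANNELS = 10   # assumption: acc(3) + gyro(3) + mag(3) + baro(1)
WINDOW_LEN = 100    # assumption: ~2 s of data sampled at 50 Hz
NUM_CLASSES = 9     # still, walk, upstairs, up/down elevator,
                    # up/down escalator, downstairs, turning

class ActivityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                 # time: 100 -> 50
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                 # time: 50 -> 25
        )
        self.classifier = nn.Linear(64 * (WINDOW_LEN // 4), NUM_CLASSES)

    def forward(self, x):                    # x: (batch, channels, time)
        z = self.features(x)
        return self.classifier(z.flatten(1))

# Example: score one batch of 2 s sensor windows.
model = ActivityCNN()
windows = torch.randn(8, NUM_CHANNELS, WINDOW_LEN)
logits = model(windows)                      # (8, 9) class scores
pred = logits.argmax(dim=1)                  # predicted activity index
```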

Highlights

  • In the indoor environment, human activity contains rich semantic information; for example, if a user’s activity is recognized as taking an elevator, the user’s location can be inferred to be at an elevator.

  • In this paper, we focus on indoor activity recognition, which contains context information and can be used for indoor localization.

  • We propose a deep learning-based method for indoor activity recognition that combines data from multiple smartphone built-in sensors (a preprocessing sketch follows this list).
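
To make the sensor-combination idea concrete, here is a hypothetical preprocessing sketch that slices a continuous, time-aligned multi-sensor recording into fixed-length, half-overlapping windows suitable for a 1D CNN. The sampling rate, window length, overlap and 10-channel layout are assumptions for illustration, not details stated by the paper.

```python
# Hypothetical windowing of a time-aligned multi-sensor stream.
import numpy as np

def make_windows(stream: np.ndarray, win_len: int = 100, step: int = 50):
    """stream: (num_samples, num_channels) array of time-aligned
    accelerometer/gyroscope/magnetometer/barometer readings.
    Returns an array of shape (num_windows, num_channels, win_len)."""
    starts = range(0, len(stream) - win_len + 1, step)
    # Transpose each slice to (channels, time) for a 1D CNN.
    return np.stack([stream[s:s + win_len].T for s in starts])

# Example: 10 s of 10-channel data at an assumed 50 Hz
# -> 9 half-overlapping 2 s windows of shape (10, 100).
stream = np.random.randn(500, 10)
windows = make_windows(stream)      # shape (9, 10, 100)
```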

Summary

Introduction

Human activity contains rich semantic information; for example, if a user’s activity is recognized as taking an elevator, the user’s location can be inferred to be at an elevator. These activities can be used as landmarks for indoor localization and mapping [1,2,3,4,5]. The recognition of human activities has been approached in two different ways, namely ambient sensing methods and wearable sensing methods [6]. Wearable sensing methods rely on sensors attached to the user and can be implemented directly on smartphones [9].
