Abstract

Human activity recognition aims to classify user activity in various applications such as healthcare, gesture recognition, and indoor navigation. In the latter, smartphone location recognition is gaining attention as it enhances indoor positioning accuracy. Commonly, the smartphone's inertial sensor readings are used as input to a machine learning algorithm that performs the classification. Several approaches exist for this task: feature-based approaches, one-dimensional deep learning algorithms, and two-dimensional deep learning architectures. Deep learning approaches make manual feature engineering redundant, and two-dimensional architectures additionally allow methods from the well-established computer vision domain to be leveraged. In this paper, a framework for smartphone location and human activity recognition, based on the smartphone's inertial sensors, is proposed. The contributions of this work are a novel time series encoding approach, from inertial signals to inertial images, and transfer learning from the computer vision domain to the inertial sensor classification problem. Four different datasets are employed to show the benefits of the proposed approach. Moreover, as the proposed framework performs classification on inertial sensor readings, it can be applied to other classification tasks using inertial data, and it can also be adapted to handle other types of sensory data collected for a classification task.
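To make the transfer-learning step concrete, the minimal sketch below fine-tunes an ImageNet-pretrained convolutional network on inertial images. The backbone choice (ResNet-18 via torchvision), the number of activity classes, the image resolution, and the decision to train only the new classifier head are illustrative assumptions, not necessarily the configuration used in the paper.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Assumed setup: inertial images as 3-channel tensors, 8 activity classes.
    NUM_CLASSES = 8

    # Load an ImageNet-pretrained backbone and replace its classifier head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    # Freeze the pretrained features and train only the new head.
    for param in model.parameters():
        param.requires_grad = False
    for param in model.fc.parameters():
        param.requires_grad = True

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch of encoded windows.
    images = torch.randn(16, 3, 64, 64)
    labels = torch.randint(0, NUM_CLASSES, (16,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

In practice, the frozen backbone can later be unfrozen and fine-tuned end-to-end at a lower learning rate once the new head has converged.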

Highlights

  • Four different datasets are used to evaluate the proposed approach and to compare it with other approaches. All of these datasets contain accelerometer and gyroscope measurements, each labeled with a specific activity.

  • Human activity recognition is an important task in various applications such as healthcare, gesture recognition, and indoor navigation.

  • The inertial image (INIM) approach transforms the accelerometer and gyroscope signals into images, enabling the use of proven architectures and tools from the computer vision domain (a minimal encoding sketch follows below).
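The paper's exact INIM encoding is not reproduced here; as an illustrative sketch under assumed choices (a six-channel window, row-wise channel stacking, min-max normalization, and a 64x64 output), one simple way to turn a window of accelerometer and gyroscope samples into an image is shown below.

    import numpy as np

    def inertial_window_to_image(window: np.ndarray, size: int = 64) -> np.ndarray:
        """Encode a (6, N) window of accelerometer + gyroscope samples as a
        size x size grayscale image. Illustrative sketch only; the INIM
        encoding proposed in the paper may differ."""
        # Normalize each channel to [0, 1] to remove sensor-specific scale.
        lo = window.min(axis=1, keepdims=True)
        hi = window.max(axis=1, keepdims=True)
        norm = (window - lo) / np.maximum(hi - lo, 1e-8)

        # Tile the 6 channel rows to fill the vertical axis of the image.
        reps = int(np.ceil(size / norm.shape[0]))
        img = np.tile(norm, (reps, 1))[:size, :]

        # Resample the time axis to the target width by nearest-neighbor indexing.
        cols = np.linspace(0, norm.shape[1] - 1, size).astype(int)
        return img[:, cols]

    # Example: a 2-second window at 100 Hz from a 3-axis accelerometer
    # and a 3-axis gyroscope (6 channels in total).
    window = np.random.randn(6, 200)
    image = inertial_window_to_image(window)
    print(image.shape)  # (64, 64)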

Introduction

Human activity recognition (HAR) aims to classify user activity in various applications such as gesture recognition [1,2], healthcare [3], home behaviour analysis [4], indoor navigation [5,6], and many more. Focusing on activity recognition for navigation applications, one branch of HAR is smartphone location recognition (SLR), whose goal is to classify the current location of the smartphone on the user (e.g., in a pocket, in hand, or in a bag). Both HAR and SLR use readings from the smartphone's inertial sensors, namely the accelerometers and gyroscopes, to perform the classification task, and both are gaining increasing attention in the navigation community.
