Abstract

Due to the increasing number of mobile robots, including domestic robots for cleaning and maintenance, in developed countries, human activity recognition is essential for congruent human-robot interaction. Although this is a challenging task for robots, learning human activities enables autonomous mobile robots (AMRs) to navigate an uncontrolled environment without guidance. Building a correct classifier for complex human actions is non-trivial, since simple actions must be combined to recognize a complex human activity. In this paper, we trained a model for human activity recognition using a convolutional neural network. We trained and validated the model on the Vicon physical action dataset and also tested it on our generated dataset (VMCUHK). Our experiments show that the method performs the human activity recognition task with high accuracy on both the Vicon physical action dataset and the VMCUHK dataset.

Highlights

  • Human activity recognition (HAR) is important for an autonomous robot’s interaction with real-world objects, environments, and people, further enhancing its capabilities

  • Human activity recognition involves the interpretation of human actions or gestures from a series of human activities

  • We used depthwise convolution, which is an effective method to reduce the computational complexity of deep neural networks. It consists of a spatial convolution performed independently over each input channel, followed by a 1×1 convolution across the output channels [10]. This output serves as input to a Rectified Linear Unit (ReLU) activation function, and we performed 1D max pooling on the output of the convolution layer
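The depthwise separable block in the highlight above can be sketched as follows. This is a minimal, dependency-free illustration of the operation order (per-channel spatial convolution, then a 1×1 pointwise convolution, then ReLU and 1D max pooling); the function names, kernel values, and toy input are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a depthwise separable 1D convolution block:
# depthwise conv (one small kernel per input channel), then a 1x1
# pointwise conv mixing channels, then ReLU and 1D max pooling.

def depthwise_conv1d(x, kernels):
    """x: list of channels, each a list of floats.
    kernels: one small kernel per channel (applied independently)."""
    out = []
    for ch, k in zip(x, kernels):
        n, m = len(ch), len(k)
        out.append([sum(ch[i + j] * k[j] for j in range(m))
                    for i in range(n - m + 1)])
    return out

def pointwise_conv1d(x, weights):
    """1x1 convolution: linearly mixes channels at each time step.
    weights: one row of per-input-channel weights per output channel."""
    steps = len(x[0])
    return [[sum(w[c] * x[c][t] for c in range(len(x)))
             for t in range(steps)]
            for w in weights]

def relu(x):
    return [[max(0.0, v) for v in ch] for ch in x]

def max_pool1d(x, size=2):
    """Non-overlapping 1D max pooling over each channel."""
    return [[max(ch[i:i + size])
             for i in range(0, len(ch) - size + 1, size)]
            for ch in x]

# Toy input: 2 channels (e.g. two kinematic signals), 6 time steps.
x = [[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
     [6.0, 5.0, 4.0, 3.0, 2.0, 1.0]]
dw = depthwise_conv1d(x, kernels=[[1.0, -1.0], [0.5, 0.5]])
pw = pointwise_conv1d(dw, weights=[[1.0, 1.0], [1.0, -1.0]])
y = max_pool1d(relu(pw))  # -> [[4.5, 2.5], [0.0, 0.0]]
```

The efficiency gain comes from factoring a full convolution (every output channel looks at every input channel at every kernel position) into a cheap per-channel spatial pass plus a cheap channel-mixing pass.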


Summary

INTRODUCTION

Human activity recognition (HAR) is important for an autonomous robot’s interaction with real-world objects, environments, and people, further enhancing its capabilities. A domestic service robot for cleaning can recognize a sequence of activities such as “sitting” followed by “standing” and “walking” (meaning the person has left the current environment) and, at its discretion, clean up in response to such a combination of activities. This has several applications in catering for the needs of elderly people living alone or people with disabilities. The objective is to correctly classify 3D human activities from kinematic data; the goal is to combine individual human actions into activity recognition for a mobile robot. The rest of this paper is organized as follows: Section II describes the model and parameters used; Section III describes the experimental setup, data, and data preprocessing; Section IV discusses the findings; Section V concludes the paper.
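The idea of combining individual actions into a complex activity can be sketched as a simple pattern match over a stream of per-frame action labels. The pattern table, label names, and robot response below are illustrative assumptions (the paper's classifier is a CNN, not a rule table); the sketch only shows how a "sitting" → "standing" → "walking" sequence could trigger a response such as cleaning the vacated area.

```python
# Hypothetical sketch: map sequences of recognized simple actions to
# a complex activity. Pattern names and labels are assumptions for
# illustration only.
from itertools import groupby

# Each key is a sequence of simple actions; the value names the
# complex activity it implies.
ACTIVITY_PATTERNS = {
    ("sitting", "standing", "walking"): "person_left_area",
}

def collapse(labels):
    """Collapse runs of identical per-frame labels into single actions."""
    return [k for k, _ in groupby(labels)]

def detect_activity(labels, patterns=ACTIVITY_PATTERNS):
    """Scan the collapsed action sequence for a known pattern."""
    seq = collapse(labels)
    for pattern, activity in patterns.items():
        n = len(pattern)
        for i in range(len(seq) - n + 1):
            if tuple(seq[i:i + n]) == pattern:
                return activity
    return None

# Per-frame classifier outputs over time:
actions = ["sitting", "sitting", "standing", "walking", "walking"]
if detect_activity(actions) == "person_left_area":
    pass  # e.g. the robot schedules cleaning of the vacated area
```

Collapsing repeated labels first makes the rule robust to how long each simple action lasts, so only the order of actions matters.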

MODEL DESCRIPTION AND PARAMETER SETTINGS
DATA PREPROCESSING AND EXPERIMENTAL SETUP
Model Validation through Vicon physical activity dataset
Model Testing through VMCUHK
EXPERIMENTAL RESULT AND ANALYSIS
Findings
CONCLUSION AND FUTURE WORK