Abstract

Numerous methods and applications have been proposed for human activity recognition (HAR). This paper presents a mini-survey of recent HAR studies and two originally developed benchmark datasets obtained using environmental sensors. For the first dataset, we specifically examine human pose estimation and the recognition of slight motions related to activities of daily living (ADL). Our proposed method employs OpenPose to describe feature vectors that are unaffected by objects or scene features, together with a convolutional neural network (CNN) with a VGG-16 backbone that recognizes behavior patterns after the obtained images are divided into learning and verification subsets. The first dataset comprises time-series panoramic images captured with a fisheye-lens monocular camera with a wide field of view. We attempted to recognize five behavior patterns: eating, reading, operating a smartphone, operating a laptop computer, and sitting. Even with panoramic images that include distortion, the results demonstrate that slight motions and pose-based behavior patterns can be recognized. The second dataset was obtained using five environmental sensors: a thermopile sensor, a CO2 sensor, and air pressure, humidity, and temperature sensors. The proposed sensor system imposes no physical constraints on subjects and preserves each subject's privacy. Using a long short-term memory (LSTM) network combined with a CNN, a deep-learning model that handles time-series features, we recognized eight behavior patterns: eating, operating a laptop computer, operating a smartphone, playing a game, reading, exiting, taking a nap, and sitting. The recognition accuracy for the second dataset was lower than that for the first, image-based dataset, but we demonstrated that behavior patterns can be recognized from time series of weak sensor signals. After accuracy evaluation, the recognition results for the first dataset can be reused as automatically annotated labels for the second dataset. Our proposed method thereby actualizes semi-automatic annotation, detection of falsely recognized categories, and sensor calibration. The feasibility study results show a new possibility for HAR applied to ADL based on these two types of unique sensors.
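The abstract describes two recognition pipelines; the sketches below are intended only to make them concrete. This first sketch is a minimal VGG-16 transfer-learning classifier for the five image-based behavior classes, written in Keras. The input resolution, the added head layers, the learning rate, and the use of ImageNet pretraining are illustrative assumptions rather than the paper's exact configuration, and the OpenPose feature-description step is omitted.

```python
# Minimal sketch (assumptions noted inline), not the authors' exact model:
# a VGG-16 backbone with a small classification head for five classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_IMAGE_CLASSES = 5  # eating, reading, smartphone, laptop, sitting

backbone = tf.keras.applications.VGG16(
    weights="imagenet",         # ImageNet pretraining: an assumption
    include_top=False,          # drop the original 1000-way classifier
    input_shape=(224, 224, 3),  # assumed input resolution
)
backbone.trainable = False      # freeze convolutional features initially

image_model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_IMAGE_CLASSES, activation="softmax"),
])

image_model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

The second sketch is a CNN combined with an LSTM for fixed-length windows of the five environmental-sensor signals (thermopile, CO2, air pressure, humidity, temperature), classifying the eight behavior patterns. The window length, filter counts, LSTM width, and the treatment of each sensor as a single scalar channel are assumptions made for illustration.

```python
# Minimal sketch (assumptions noted inline), not the authors' exact model:
# 1-D convolutions extract local features from sensor windows, and an LSTM
# aggregates them over time before an eight-way softmax classifier.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 128         # assumed time steps per window
CHANNELS = 5         # one scalar channel per sensor: an assumption
NUM_SENSOR_CLASSES = 8  # eating, laptop, smartphone, game, reading,
                        # exiting, napping, sitting

sensor_model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),  # temporal aggregation of the CNN features
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_SENSOR_CLASSES, activation="softmax"),
])

sensor_model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Both sketches would be trained with `model.fit` on labeled images or sensor windows; under the scheme described in the abstract, labels for the sensor dataset could come from the accuracy-evaluated image-based recognition results rather than from manual annotation.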

Highlights

  • Human activity recognition (HAR) is a challenging task for pattern recognition and computer vision studies, especially when using off-the-shelf camera technology [1]

  • Although the recognition accuracy for the second dataset was lower than that obtained for the first dataset, this paper presents an exploration of a new possibility of using human activity recognition (HAR) for activities of daily living (ADL) based on unique sensor systems and two original datasets

  • Based on investigation and analysis of these existing studies, we examine tiny signal features of human motions related to HAR and ADL



Introduction

Human activity recognition (HAR) is a challenging task for pattern recognition and computer vision studies, especially when using off-the-shelf camera technology [1]. Numerous applications and their derivative variations exist in the areas of multimodal gesture recognition [2], consumption and consumer behavior analysis [3], human–robot interaction [4], robotics therapy [5], and body motion analysis in sports [6]. The development of a vision-based system [11] and an invisible sensor system [12] for bed-leaving detection and fall prevention at hospitals and nursing care sites has been reported. Such systems emphasize abnormality detection based on the identification of behavior patterns.

