Abstract

Behavior analysis through posture recognition is an essential research topic in robotic systems. Maintaining an unhealthy sitting posture for long periods seriously harms human health and may even lead to lumbar disease, cervical disease, and myopia. Automatic vision-based detection of unhealthy sitting posture, as an example of posture detection in robotic systems, has become a hot research topic. However, existing methods focus only on extracting features of the human body itself and lack an understanding of the relevancies among objects in the scene, and hence fail to recognize some types of unhealthy sitting postures in complicated environments. To alleviate these problems, a scene recognition and semantic analysis approach to unhealthy sitting posture detection in screen-reading is proposed in this paper. The key skeletal points of the human body are detected and tracked with a Microsoft Kinect sensor. Meanwhile, a deep learning method, Faster R-CNN, is used in the scene recognition stage of our method to accurately detect objects and extract relevant features. Our method then performs semantic analysis through Gaussian-mixture behavioral clustering for scene understanding. The relevant features of the scene and the skeletal features extracted from the human body are fused into semantic features to discriminate various types of sitting postures. Experimental results demonstrate that our method accurately and effectively detects various types of unhealthy sitting postures in screen-reading and avoids erroneous detections in complicated environments. Compared with existing methods, our proposed method detects more types of unhealthy sitting postures, including some that existing methods cannot detect. Our method can potentially be applied and integrated as a medical assistance module in robotic systems for health care and treatment.
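The semantic-analysis step described above clusters fused scene and skeletal features with a Gaussian mixture model. The following is an illustrative sketch only, not the authors' code: the feature layout (neck angle, torso angle, distance to screen), the synthetic data, and the use of scikit-learn's `GaussianMixture` are all assumptions standing in for the paper's own clustering pipeline.

```python
# Sketch: Gaussian-mixture clustering of fused posture + scene features.
# Feature definitions and values are hypothetical, for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic fused feature vectors: [neck_angle_deg, torso_angle_deg, screen_dist_m]
upright = rng.normal([10.0, 5.0, 0.60], [3.0, 2.0, 0.05], size=(50, 3))
slouched = rng.normal([35.0, 25.0, 0.40], [3.0, 2.0, 0.05], size=(50, 3))
features = np.vstack([upright, slouched])

# Fit a two-component GMM and assign each sample to a behavioral cluster
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features)
print(labels.shape)  # (100,)
```

Because the two synthetic groups are well separated, the fitted components align almost perfectly with the upright and slouched populations; in the paper's setting, the cluster assignments would feed the semantic discrimination step.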

Highlights

  • Behavior analysis through posture recognition is an essential research topic in robotic systems. More and more researchers are keen to study behavior recognition and semantic analysis [1,2,3,4,5]

  • Experimental results demonstrated that our method accurately and effectively detected various types of unhealthy sitting postures in screen-reading and avoided error detection in complicated environments

  • Unhealthy sitting posture increases the risk of occupational musculoskeletal diseases, e.g., lumbar disease and cervical disease, and is also closely related to the incidence of myopia

Summary

Introduction

Behavior analysis through posture recognition is an essential research topic in robotic systems. References [16,17,18] extracted human body features from a variety of sensors to estimate sitting postures. Although these methods can detect some unhealthy sitting postures to a certain extent, they have the limitation of requiring sensors to be placed in specific contact areas. Yao et al. [24] proposed a new method for judging unhealthy sitting postures based on detecting the neck angle and the torso angle using a Kinect sensor. Although the related references [23,24] could detect unhealthy sitting postures to a certain degree, these methods focused only on extracting features of the humans themselves. In our method, a deep learning method is used in the scene recognition stage to accurately detect objects and extract relevant features.
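The angle-based check referenced above (Yao et al. [24]) can be sketched as follows. The joint coordinates, the vertical-inclination formulation, and the threshold values here are assumptions for illustration; they are not taken from the cited paper.

```python
# Hypothetical sketch of an angle-based unhealthy-posture check:
# estimate neck and torso inclination from three 2-D skeleton joints
# and flag the posture when either angle exceeds a threshold.
import math

def inclination_deg(upper, lower):
    """Angle (degrees) of the segment lower->upper away from vertical.
    Points are (x, y) with y increasing upward in this sketch."""
    dx = upper[0] - lower[0]
    dy = upper[1] - lower[1]
    return math.degrees(math.atan2(abs(dx), dy))

def is_unhealthy(head, neck, hip, neck_max=20.0, torso_max=15.0):
    """Thresholds neck_max/torso_max are assumed values, not from [24]."""
    neck_angle = inclination_deg(head, neck)   # head relative to neck
    torso_angle = inclination_deg(neck, hip)   # neck relative to hip
    return neck_angle > neck_max or torso_angle > torso_max

# Upright posture: joints stacked vertically
print(is_unhealthy((0.0, 1.7), (0.0, 1.5), (0.0, 1.0)))  # False
# Head leaning forward over the neck
print(is_unhealthy((0.2, 1.6), (0.0, 1.5), (0.0, 1.0)))  # True
```

In practice the joints would come from the Kinect skeleton stream rather than literals, and such purely skeletal checks are exactly what the proposed method augments with scene features.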

Framework of Our Proposed Method for Unhealthy Sitting Posture Detection
Scene Recognition
Multi-object Detection Using Faster R-CNN
Method
Skeleton Extraction Using Kinect
Semantic Generation using Gaussian-Mixture Clustering
Semantic Discrimination
Self-Collected Test Dataset and Some Detection Results
Quantitative Analysis
Qualitative Analysis
Conclusions