Abstract

Joint estimation of the human body in point clouds is a key step in tracking human movements. In this work, we present a geometric method for detecting body joints in a single-frame point cloud captured with a Time-of-Flight (ToF) camera. A three-dimensional (3D) human silhouette, serving as a global feature of the single-frame point cloud, is extracted from the pre-processed data; the angle and aspect ratio of the silhouette are then used for pose recognition, and 14 joints of the human body are derived from geometric features of the 3D silhouette. To verify the method, we test on an in-house 3D dataset of 1200 depth frames covering four poses (upright, raising hands, parallel arms, and akimbo), as well as on a subset of the G3D dataset. With hand-labelled joints of each human body as the ground truth for validation and benchmarking, the average normalized error of our geometric method is less than 5.8 cm. With a distance threshold of 10 cm from the ground truth, the proposed method achieves an average accuracy of approximately 90%.
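
As a concrete reading of the evaluation protocol, the following C++ sketch computes the two metrics reported above (mean joint error and accuracy at a 10 cm distance threshold) for a single frame. The struct and function names are illustrative, and the code assumes predicted and ground-truth joints are given as 14 3D points in metres; it is a minimal sketch, not the paper's evaluation code.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <iostream>

// Illustrative only: assumes both predictions and hand-labelled ground
// truth are available as 14 (x, y, z) joint positions in metres.
struct Joint3D { double x, y, z; };
constexpr std::size_t kNumJoints = 14;
using Skeleton = std::array<Joint3D, kNumJoints>;

double distance(const Joint3D& a, const Joint3D& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Fraction of joints within the threshold of the ground truth
// (0.10 m in the paper's evaluation), plus the mean joint error.
void evaluate(const Skeleton& pred, const Skeleton& gt,
              double threshold = 0.10) {
    std::size_t correct = 0;
    double errorSum = 0.0;
    for (std::size_t i = 0; i < kNumJoints; ++i) {
        const double d = distance(pred[i], gt[i]);
        errorSum += d;
        if (d < threshold) ++correct;
    }
    std::cout << "accuracy@10cm: "
              << static_cast<double>(correct) / kNumJoints
              << ", mean error (m): " << errorSum / kNumJoints << '\n';
}
```

Per-frame results of this kind would then be averaged over the 1200 in-house frames (and the G3D subset) to obtain the figures quoted above.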

Highlights

  • Human behavior recognition aims to interpret human behavior by computer and is one of the most important technologies in computer vision

  • The objective of our work is to detect the joints of the human body in a single-frame point cloud acquired by a depth camera; examples are shown in Fig. 1

  • The proposed geometric method was implemented in C++ using the Point Cloud Library (PCL); a minimal pre-processing sketch follows this list
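
Since the highlights name C++ and PCL, the sketch below shows what loading and pre-processing one ToF frame might look like with standard PCL calls (loadPCDFile and a PassThrough depth filter). The file name and depth limits are assumptions for illustration; the paper's actual pre-processing pipeline is not reproduced here.

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>

int main() {
    // Hypothetical file name; the paper's in-house data come from a ToF camera.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile<pcl::PointXYZ>("frame_0001.pcd", *cloud) < 0) {
        return -1;
    }

    // One plausible pre-processing step: keep points within a depth range
    // so that background behind the subject is discarded. The 0.5-4.0 m
    // limits are illustrative, not taken from the paper.
    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(cloud);
    pass.setFilterFieldName("z");
    pass.setFilterLimits(0.5f, 4.0f);

    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
    pass.filter(*filtered);

    // The silhouette extraction and joint detection described in the paper
    // would operate on `filtered` from here on.
    return 0;
}
```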


Introduction

Human behavior recognition aims to interpret human behavior by computer and is one of the most important technologies in computer vision. By analyzing human behavior in image sequences and identifying behavior categories, it is widely used for intelligent monitoring [1] and video analysis [2]. Human behavior can be regarded as the continuous evolution of the spatial configuration of rigid segments connected by joints [3]. If the human skeleton can be extracted and tracked reliably, human behavior can be classified through action recognition. The detection of human joints is widely used in virtual reality [4], autonomous driving [5], and elderly-care systems [6]. The first step of human behavior analysis is therefore to capture the human pose.
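
To make the "rigid segments connected by joints" view concrete, here is a small C++ sketch of a 14-joint skeleton and its segments. The specific joint names and connectivity are assumptions chosen for illustration; the paper fixes the joint count (14), but this naming and segment list are not taken from it.

```cpp
#include <array>
#include <cstddef>
#include <utility>

// Assumed 14-joint layout (head, neck, shoulders, elbows, wrists,
// hips, knees, ankles) for illustration only.
enum Joint : std::size_t {
    Head, Neck,
    LShoulder, RShoulder, LElbow, RElbow, LWrist, RWrist,
    LHip, RHip, LKnee, RKnee, LAnkle, RAnkle,
    JointCount  // == 14
};

// Each rigid segment connects two joints; a pose is the set of 3D joint
// positions, and behavior is their evolution over successive frames.
constexpr std::array<std::pair<Joint, Joint>, 13> kSegments = {{
    {Head, Neck},
    {Neck, LShoulder}, {Neck, RShoulder},
    {LShoulder, LElbow}, {RShoulder, RElbow},
    {LElbow, LWrist}, {RElbow, RWrist},
    {Neck, LHip}, {Neck, RHip},
    {LHip, LKnee}, {RHip, RKnee},
    {LKnee, LAnkle}, {RKnee, RAnkle}
}};
```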
