Abstract

Recent advances in 3D depth sensors have created many opportunities in security, surveillance, and entertainment. Such sensors enable monitoring systems that detect dangerous situations in buildings or production facilities regardless of lighting conditions. To robustly recognize emergency actions or hazardous situations involving workers at a production facility, this paper presents human joint estimation and behavior recognition algorithms that use depth information alone. To estimate human joints on a low-cost computing platform, we propose a joint estimation algorithm that integrates a geodesic graph with a support vector machine (SVM). Human feature points are extracted within a range of geodesic distances on the graph, and the graph is also used to refine the estimation result. The SVM-based joint estimator operates on randomly selected human features to reduce computation, and body parts with a wide range of motion are then estimated from their geodesic distance values. The proposed algorithm works for any person without calibration, so the system can be used with a new subject immediately, even on a low-cost computing platform.

The behavior recognition algorithm should require only a simple behavior registration process and remain robust to environmental changes. To meet these goals, we propose a template matching-based behavior recognition algorithm. Our method builds a behavior template set consisting of weighted human joint data with scale- and rotation-invariant properties; a single behavior template contains the joint information estimated in each frame. Additionally, we propose adaptive template rejection and a sliding window filter to prevent misrecognition between similar behaviors.
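The geodesic-distance feature extraction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the neighbor radius, the root joint, and the distance band for extremity candidates (hands, feet, head tend to lie far from the body center along the surface graph) are all assumed values chosen for the example.

```python
import heapq

import numpy as np


def geodesic_distances(points, root_idx, neighbor_radius=0.05):
    """Dijkstra over a proximity graph built from 3D body points.

    Edges connect points closer than `neighbor_radius` (an assumed value),
    weighted by Euclidean distance, so path lengths approximate geodesic
    distance along the body surface.
    """
    n = len(points)
    diffs = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diffs, axis=2)                    # (n, n) pairwise
    adj = [np.nonzero((dist[i] < neighbor_radius) & (dist[i] > 0))[0]
           for i in range(n)]

    geo = np.full(n, np.inf)
    geo[root_idx] = 0.0
    heap = [(0.0, root_idx)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > geo[u]:
            continue                                        # stale entry
        for v in adj[u]:
            nd = d + dist[u, v]
            if nd < geo[v]:
                geo[v] = nd
                heapq.heappush(heap, (nd, v))
    return geo


def extremal_features(points, geo, band=(0.4, 1.0)):
    """Keep points whose geodesic distance falls in a band (fractions of
    the maximum distance; the band values are assumed for illustration)."""
    finite = geo[np.isfinite(geo)]
    lo, hi = band[0] * finite.max(), band[1] * finite.max()
    mask = np.isfinite(geo) & (geo >= lo) & (geo <= hi)
    return points[mask]
```

On a chain of points, the geodesic distance grows along the chain and the band keeps only the far end, mimicking how limb extremities are isolated from a body-center root.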
The human joint estimation and behavior recognition algorithms are evaluated individually through several experiments, and their performance is demonstrated through comparison with other algorithms. The experimental results show that our method performs well and is applicable in real environments.
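The template-based recognition pipeline can be sketched as below. This is an illustrative stand-in for the paper's method, not its actual implementation: the root/reference joint indices, joint weights, rejection threshold, and vote length are assumed values, and the score-threshold rejection plus majority vote are simple stand-ins for the adaptive template rejection and sliding window filter.

```python
from collections import Counter, deque

import numpy as np


def normalize_pose(joints, root=0, ref=1):
    """Translate the root joint to the origin and scale by the root->ref
    bone length, giving translation/scale-invariant coordinates
    (root/ref indices are assumed, e.g. pelvis and neck)."""
    x = joints - joints[root]
    return x / (np.linalg.norm(x[ref]) + 1e-9)


def template_score(window, template, weights):
    """Weighted mean per-joint distance between a window of normalized
    poses and one behavior template of the same length."""
    d = np.linalg.norm(window - template, axis=2)           # (frames, joints)
    return float(np.mean(d * weights))


def recognize(frames, templates, weights, reject=0.5, vote_len=5):
    """Slide a window over the pose stream, score every template, reject
    matches whose score exceeds `reject`, and smooth the label stream
    with a majority vote over the last `vote_len` decisions."""
    tlen = len(next(iter(templates.values())))
    votes = deque(maxlen=vote_len)
    labels = []
    for t in range(tlen, len(frames) + 1):
        window = frames[t - tlen:t]
        scored = {name: template_score(window, tpl, weights)
                  for name, tpl in templates.items()}
        best = min(scored, key=scored.get)
        votes.append(best if scored[best] <= reject else None)
        majority, _ = Counter(votes).most_common(1)[0]      # None = no match
        labels.append(majority)
    return labels
```

Because every frame is normalized before matching, the same registered template can match subjects of different heights and positions, which is what makes a one-shot behavior registration process plausible.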
