Abstract

A distributed camera network for human pose estimation can overcome the limited field of view and occlusion problems of a single view, which gives it great potential for wide-area surveillance applications. To fuse information from different fields of view, we propose a distributed human pose estimation method that combines the interactive multiple model (IMM) algorithm with distributed information fusion of human skeleton joints. Compared with state-of-the-art works, which often depict the motion of human skeleton joints with a single motion model, e.g., constant velocity, the novelty of our work is that the maneuvering property of human action is handled by the IMM, i.e., the motion of human skeleton joints in the filter is approximated using constant velocity, constant acceleration, and Singer motion models. Owing to the advantages of the IMM algorithm for maneuvering target tracking, our method can not only solve the single-view occlusion problem but also mitigate the joint-position fluctuation caused by the estimation errors of individual sensor nodes after distributed information fusion. Experimental results on human action recognition show that the proposed method improves the action recognition rate on datasets captured by Kinect V2. In addition, we built a distributed camera network using embedded machine learning boards, so that deep learning-based human pose estimation methods can be employed in our framework to overcome the limitations of the original Kinect SDK.
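To make the filtering approach concrete, the sketch below shows one IMM cycle for a single joint coordinate: a mixing step, a Kalman predict/update per motion model, and a model-probability update. It is a minimal illustration under stated assumptions, not the paper's implementation: it uses only two of the three motion models (constant velocity and constant acceleration, omitting the Singer model), and the frame rate, noise levels, and switching matrix are assumed values.

```python
import numpy as np

# Hypothetical parameter choices: the frame rate, noise levels, and the
# model-switching matrix below are illustrative assumptions, not paper values.
dt = 1.0 / 30.0

# Shared state [position, velocity, acceleration] so the models can be mixed.
F_cv = np.array([[1, dt, 0],             # constant velocity: acceleration zeroed
                 [0, 1,  0],
                 [0, 0,  0]], dtype=float)
F_ca = np.array([[1, dt, 0.5 * dt**2],   # constant acceleration
                 [0, 1,  dt],
                 [0, 0,  1]], dtype=float)
models = [F_cv, F_ca]
H = np.array([[1.0, 0.0, 0.0]])          # we observe joint position only
Q = np.eye(3) * 1e-3                     # assumed process noise
R = 1e-2                                 # assumed measurement noise (scalar)

mu = np.array([0.5, 0.5])                # model probabilities
PI = np.array([[0.95, 0.05],             # assumed Markov model-switching matrix
               [0.05, 0.95]])
x = [np.zeros((3, 1)) for _ in models]   # per-model state estimates
P = [np.eye(3) for _ in models]          # per-model covariances

def imm_step(z):
    """One IMM cycle: mixing, per-model Kalman filtering, probability update."""
    global mu
    # 1) Mixing: blend per-model estimates into new initial conditions.
    c = PI.T @ mu                                # predicted model probabilities
    w = PI * mu[:, None] / c[None, :]            # mixing weights w[i, j]
    x0 = [sum(w[i, j] * x[i] for i in range(2)) for j in range(2)]
    P0 = [sum(w[i, j] * (P[i] + (x[i] - x0[j]) @ (x[i] - x0[j]).T)
              for i in range(2)) for j in range(2)]
    # 2) Kalman predict/update per model, recording each model's likelihood.
    lik = np.zeros(2)
    for j, F in enumerate(models):
        xp = F @ x0[j]
        Pp = F @ P0[j] @ F.T + Q
        s = (H @ Pp @ H.T)[0, 0] + R             # innovation variance (scalar)
        K = Pp @ H.T / s                         # Kalman gain
        r = z - (H @ xp)[0, 0]                   # innovation
        x[j] = xp + K * r
        P[j] = (np.eye(3) - K @ H) @ Pp
        lik[j] = np.exp(-0.5 * r**2 / s) / np.sqrt(2 * np.pi * s)
    # 3) Update model probabilities and form the combined estimate.
    mu = c * lik
    mu /= mu.sum()
    return sum(mu[j] * x[j] for j in range(2))

# Usage: feed noisy joint positions frame by frame.
for t in range(100):
    z = np.sin(t * dt) + 0.01 * np.random.randn()
    estimate = imm_step(z)   # fused 3x1 state for this joint coordinate
```

Keeping all models in a shared three-dimensional state space is a common IMM convenience: it lets the mixing step combine per-model estimates directly, without converting between state spaces.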
