Abstract
Estimating the depth of humans in the camera coordinate space plays a crucial role in understanding the behavior and activities of multiple people in 3D scenes. However, existing monocular methods rarely combine global image features and human body-part features effectively, which can leave a large gap between the estimated and actual locations in some cases, e.g., for persons with unusual body sizes or under mutual occlusion between humans in the image. This paper presents a novel Robust 3D Human Localization (R3HL) network consisting of two stages, global depth awareness and body-parts depth awareness, to significantly improve the robustness and accuracy of 3D localization. In the first stage, front-back and far-near relationship estimation modules based on multiple persons are proposed so that the network extracts depth features from a global perspective. In the second stage, the network focuses on the target human. We propose a Pose-guided Multi-person Repulsion (PMR) module to enhance the target human's features and suppress interfering features produced by the background and other people. In addition, an Adaptive Body-parts Attention (ABA) module is designed to assign a different feature weight to each joint. Finally, the human's absolute depth is obtained through global pooling and fully connected layers. The experimental results show that shifting attention from the whole image to a single person helps locate people with different body sizes and poses in diverse scenes. Our method achieves better performance than other state-of-the-art methods on both indoor and outdoor 3D multi-person datasets.
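To make the second stage concrete, the sketch below illustrates the general idea of weighting per-joint features and regressing absolute depth through pooling and fully connected layers, as the abstract describes. It is a minimal illustration under assumed tensor shapes; the module and class names (AdaptiveBodyPartsAttention, DepthHead) are hypothetical and do not reflect the authors' actual implementation.

```python
import torch
import torch.nn as nn

class AdaptiveBodyPartsAttention(nn.Module):
    """Hypothetical sketch of an ABA-style module: learn a scalar score per
    joint and use softmax-normalized weights to pool the joint features."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # one scalar score per joint

    def forward(self, joint_feats: torch.Tensor) -> torch.Tensor:
        # joint_feats: (batch, num_joints, feat_dim)
        weights = torch.softmax(self.score(joint_feats), dim=1)  # (B, J, 1)
        return (weights * joint_feats).sum(dim=1)                # (B, feat_dim)

class DepthHead(nn.Module):
    """Fully connected layers that regress one absolute depth per person
    from the pooled (attention-weighted) feature vector."""
    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pooled_feat: torch.Tensor) -> torch.Tensor:
        return self.mlp(pooled_feat)  # (B, 1) absolute depth estimate

# Usage with dummy data: 2 persons, 17 joints, 128-dimensional joint features.
aba = AdaptiveBodyPartsAttention(128)
head = DepthHead(128)
depth = head(aba(torch.randn(2, 17, 128)))
print(depth.shape)  # torch.Size([2, 1])
```

Here the softmax-weighted sum stands in for the global pooling step named in the abstract; the actual network may pool differently and condition the joint features on the PMR module's output.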