Abstract

Human parsing, the task of labeling pixels in human images with fine-grained semantic parts, has achieved significant progress during the past decade. However, several challenges remain, due to occlusions, varying poses, and the similar appearance of left/right parts. To tackle these problems, a Human Kinematic Skeleton Graph Layer (HKSGL) is proposed to augment regular neural networks with human kinematic skeleton information. The HKSGL has two major components: a kinematic skeleton graph and an interconnected modular neural layer. The kinematic skeleton graph is a user-predefined skeleton graph that models the interconnections between different semantic parts. The skeleton graph is then passed to the interconnected modular neural layer, which is composed of a set of modular blocks corresponding to different semantic parts. The HKSGL is a lightweight, low-cost layer that can be easily attached to any existing neural network. To demonstrate the power of the HKSGL, a new dataset on human parsing under occlusions is also collected, termed RAP-Occ. Extensive experiments have been performed on four human parsing datasets: LIP, CIHP, ATR, and RAP-Occ. Two popular baselines, i.e., DeepLab V3+ and CE2P, are augmented with the proposed HKSGL. The augmented models achieve competitive performance in comparison with state-of-the-art methods.
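The abstract describes the HKSGL only at a high level: a predefined skeleton graph routes information between per-part modular blocks. As a rough illustration of that idea only, the sketch below encodes a skeleton graph as an adjacency list over semantic parts and lets each part's block mix its own features with those of kinematically adjacent parts. The part names, graph, and simple averaging rule are all assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a kinematic-skeleton graph layer (NOT the paper's code).
# Each semantic part carries a feature vector; a modular block updates a part
# by averaging its features with those of skeleton-adjacent parts.

# Assumed skeleton graph over a few semantic parts (undirected edges).
SKELETON = {
    "head": ["torso"],
    "torso": ["head", "left_arm", "right_arm"],
    "left_arm": ["torso"],
    "right_arm": ["torso"],
}

def hksgl_forward(features):
    """features: dict mapping part name -> feature vector (list of floats).

    Returns a new dict where each part's vector is the elementwise mean of
    its own vector and its skeleton neighbors' vectors.
    """
    out = {}
    for part, feat in features.items():
        group = [feat] + [features[n] for n in SKELETON[part]]
        # Average each feature dimension over the part and its neighbors.
        out[part] = [sum(vals) / len(group) for vals in zip(*group)]
    return out

feats = {
    "head": [1.0, 0.0],
    "torso": [0.0, 1.0],
    "left_arm": [2.0, 2.0],
    "right_arm": [0.0, 0.0],
}
updated = hksgl_forward(feats)
```

In a real network this mixing would be learned (e.g., per-part weight matrices) and attached after a backbone's part-feature maps; the averaging here only shows how the graph constrains which parts exchange information.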
