Abstract

When parsing images with regular spatial layout, the location of a pixel (x,y) can provide an important prior for its semantic label. This paper proposes a novel way to leverage both location and appearance information for pixel labeling. The proposed method exploits the spatial layout of the image by building local pixel classifiers that are location-constrained, i.e., trained only with pixels from a local neighborhood region. Albeit simple, the proposed local learning works surprisingly well on challenging image parsing problems, such as pedestrian parsing and object segmentation, and outperforms state-of-the-art results obtained with global classifiers. To better understand the behavior of the local classifier, we perform a bias-variance analysis and show that it essentially performs spatial smoothing of a global classifier that uses both appearance and location information, which explains why the local classifier is more discriminative yet can still handle misalignment. Moreover, our theoretical and experimental studies highlight the importance of selecting an appropriate neighborhood size for location-constrained learning, as it can significantly influence the parsing results.
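The core idea can be illustrated with a minimal sketch: partition the image plane into grid cells, and for each cell train a classifier using only the pixels (from all training images) that fall within a fixed-radius neighborhood of the cell center. The image size, grid spacing, neighborhood radius, and the use of logistic regression below are illustrative assumptions, not the paper's actual implementation details.

```python
# Sketch of location-constrained local classifiers, assuming roughly aligned
# training images of size H x W with per-pixel feature vectors and integer labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

H, W = 64, 32      # assumed image size
STEP = 8           # grid spacing between local classifier centers (assumed)
RADIUS = 12        # neighborhood radius (pixels) defining the location constraint (assumed)

def train_local_classifiers(features, labels):
    """features: (N, H, W, D) per-pixel features; labels: (N, H, W) integer labels."""
    classifiers = {}
    for cy in range(0, H, STEP):
        for cx in range(0, W, STEP):
            # Location constraint: gather training pixels only within RADIUS of (cy, cx).
            y0, y1 = max(0, cy - RADIUS), min(H, cy + RADIUS + 1)
            x0, x1 = max(0, cx - RADIUS), min(W, cx + RADIUS + 1)
            X = features[:, y0:y1, x0:x1].reshape(-1, features.shape[-1])
            y = labels[:, y0:y1, x0:x1].reshape(-1)
            if len(np.unique(y)) > 1:  # skip degenerate cells containing a single class
                clf = LogisticRegression(max_iter=200)
                clf.fit(X, y)
                classifiers[(cy, cx)] = clf
    return classifiers

def parse_image(features_one_image, classifiers):
    """Label each pixel using the local classifier of its nearest grid center."""
    out = np.zeros((H, W), dtype=int)
    centers = np.array(list(classifiers.keys()))
    for y in range(H):
        for x in range(W):
            nearest = centers[np.argmin(((centers - [y, x]) ** 2).sum(axis=1))]
            clf = classifiers[tuple(nearest)]
            out[y, x] = clf.predict(features_one_image[y, x][None, :])[0]
    return out
```

In this sketch the neighborhood radius plays the role discussed in the abstract: a very small radius yields highly discriminative but high-variance local models, while a radius covering the whole image degenerates to a single global classifier.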
