Abstract

Scene parsing is a challenging research area in computer vision. It assigns a semantic label to each pixel in an image. Most scene parsing approaches are parametric; they require a model acquired through a learning stage. In this paper, a new nonparametric approach to scene parsing is proposed that does not require a learning stage. Existing nonparametric approaches are based on patch correspondence. Our proposed method does not require explicit patch matching, which makes it fast and effective. The proposed approach has two parts. In the first part, a new generative approach is proposed to transfer semantic labels from a training image to an unlabelled test image. To do this, a graphical model is constructed over the regions of both the training and test images. Then, based on the proposed graphical model, a convex quadratic cost function is defined over the likelihood probabilities of the regions. The cost function is defined so that both contextual information and object-level information are considered. In the second part of our approach, a new nonparametric scene parsing approach is built on the proposed knowledge-transfer method. To evaluate the proposed approach, it is applied to the MSRC-21, Stanford background, LMO, and SUN datasets. The obtained results show that our approach outperforms comparable state-of-the-art nonparametric approaches.
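To make the label-transfer idea concrete, the following is a minimal sketch of one plausible convex quadratic cost over region likelihoods: a unary term pulls each test region's label likelihood toward an initial estimate (e.g., derived from corresponding training regions), and a pairwise term over the region adjacency graph encodes contextual smoothness. This is an illustrative assumption, not the paper's exact formulation; the function name `transfer_likelihoods` and the specific cost form are hypothetical.

```python
import numpy as np

# Assumed cost (illustration only, not the paper's exact objective):
#   E(P) = sum_i ||p_i - u_i||^2 + lam * sum_{(i,j) in edges} ||p_i - p_j||^2
# where p_i is the label-likelihood vector of region i and u_i its initial
# estimate. Setting dE/dP = 0 yields the linear system (I + lam * Lap) P = U,
# with Lap the graph Laplacian of the region adjacency graph, so the convex
# cost has a unique closed-form minimizer.

def transfer_likelihoods(U, edges, lam=1.0):
    """U: (N, L) initial region likelihoods; edges: list of (i, j) pairs."""
    N = U.shape[0]
    Lap = np.zeros((N, N))
    for i, j in edges:  # build the unnormalized graph Laplacian
        Lap[i, i] += 1.0
        Lap[j, j] += 1.0
        Lap[i, j] -= 1.0
        Lap[j, i] -= 1.0
    # Solve the normal equations of the quadratic cost.
    return np.linalg.solve(np.eye(N) + lam * Lap, U)

# Toy usage: 3 regions, 2 labels, a chain-shaped adjacency graph.
U = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
P = transfer_likelihoods(U, edges=[(0, 1), (1, 2)], lam=0.5)
labels = P.argmax(axis=1)  # final per-region label assignment
```

Because the cost is quadratic and convex, the minimizer is obtained by solving a single sparse linear system rather than by explicit patch matching, which is consistent with the speed claim above.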
