Abstract
Traditional foreground-background segmentation models rely mainly on low-level image features while ignoring visual perception. Combining visual perception with local features, a top-down segmentation model is proposed. The model treats foreground-background segmentation as an inference problem grounded in visual perception and measures the association between pairs of pixel blocks with the Kullback–Leibler divergence, which alleviates the ill-posedness of traditional single-pixel classification. Local features are then used to refine the overall segmentation result and improve its accuracy. Experimental results on the CMU-Cornell iCoseg and BSDS500 databases show that visual perception and local features improve segmentation performance to a certain extent.
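The abstract does not specify how the Kullback–Leibler divergence is evaluated over pixel blocks; the sketch below assumes each block is summarized by a normalized intensity histogram and that a symmetrized KL divergence serves as the block-to-block dissimilarity. Function names (`block_histogram`, `block_association`), the histogram bin count, and the synthetic blocks are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_histogram(block, bins=32, eps=1e-8):
    """Normalized intensity histogram of a pixel block (values assumed in [0, 1])."""
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(np.float64) + eps  # avoid zero probabilities in the log
    return hist / hist.sum()

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def block_association(block_a, block_b, bins=32):
    """Symmetrized KL divergence as a dissimilarity between two pixel blocks;
    smaller values indicate a stronger association (assumed convention)."""
    p = block_histogram(block_a, bins)
    q = block_histogram(block_b, bins)
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fg = rng.normal(0.7, 0.05, size=(16, 16)).clip(0, 1)  # bright "foreground" block
    bg = rng.normal(0.3, 0.05, size=(16, 16)).clip(0, 1)  # dark "background" block
    fg_like = (fg + rng.normal(0, 0.01, fg.shape)).clip(0, 1)
    print("fg vs fg-like:", block_association(fg, fg_like))  # small divergence
    print("fg vs bg     :", block_association(fg, bg))       # large divergence
```

Comparing block-level distributions rather than single pixels pools evidence over a neighborhood, which is one way the reasoning-over-blocks formulation can mitigate the ill-posedness of per-pixel classification.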