Abstract

Superpixel generation is an increasingly important component of computer vision pipelines. While superpixels with highly regular shapes are preferred because they simplify subsequent processing, accurate superpixel boundaries are also necessary. Previous methods usually rely on a distance function that imposes both spatial and color coherency regularization over the whole image; such a function, however, is hard to balance between shape regularity and boundary adherence, especially when the desired number of superpixels is small. In addition, non-adaptive parameters and insufficient contour information further degrade segmentation performance. To mitigate these problems, we propose a robust divide-and-conquer superpixel segmentation method. Its core idea is to apply a new contour information extraction step and a pixel clustering step to separate the input image into flat and non-flat regions, where the former targets shape regularity and the latter emphasizes boundary adherence, followed by an efficient hierarchical merging that cleans up tiny and dangling superpixels. Our algorithm requires no parameter tuning beyond the desired number of superpixels, since its internal parameters adapt to the image content. Experimental results on public benchmark datasets demonstrate that our algorithm consistently generates more regular superpixels with stronger boundary adherence than state-of-the-art methods while maintaining competitive efficiency. The source code is available at https://github.com/YunyangXu/HQSGRD.
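To make the divide-and-conquer idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the first stage: splitting an image into flat and non-flat regions using a content-adaptive gradient threshold, echoing the paper's self-adaptive parameters. The function name and the median-based threshold are illustrative assumptions.

```python
import numpy as np

def split_flat_nonflat(image, grad_thresh=None):
    """Split a grayscale image into flat and non-flat pixel masks.

    Illustrative sketch only: flat pixels (low gradient magnitude) would
    then be clustered with an emphasis on shape regularity, while non-flat
    pixels (near contours) would be clustered with an emphasis on boundary
    adherence, as described in the abstract.
    """
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)  # per-pixel gradient magnitude
    if grad_thresh is None:
        # Content-adaptive threshold (an assumption for this sketch):
        # pixels at or below the median gradient count as flat.
        grad_thresh = np.median(grad)
    flat_mask = grad <= grad_thresh
    return flat_mask, ~flat_mask
```

In a full pipeline, each mask would feed a different clustering objective before the hierarchical merging stage removes tiny and dangling superpixels.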
