Grouping processes, which “organize” given data by eliminating irrelevant items and sorting the rest into groups, each corresponding to a particular object, can provide reliable pre-processed information to higher-level computer vision functions such as object detection and recognition. In this paper, we consider the problem of grouping oriented segments in highly cluttered images. In this context, we have developed a general and powerful method based on an iterative, multiscale tensor voting approach. Segments are represented as second-order tensors and communicate with each other through a voting scheme that incorporates the Gestalt principles of visual perception. The key idea of our approach is to remove background segments conservatively in an iterative fashion, using multiscale analysis, and to re-vote on the retained segments. We have performed extensive experiments to evaluate the strengths and weaknesses of our approach using both synthetic and real images from publicly available datasets, including Williams and Thornber’s fruit-texture dataset [L. Williams, Fruit and texture images. Available from: <http://www.cs.unm.edu/~williams/saliency.html>, 2008 (last viewed in July 2008)] and the Berkeley segmentation dataset [C.F.P. Arbelaez, D. Martin, The Berkeley segmentation dataset and benchmark. Available from: <http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/>, 2008 (last viewed in July 2008)]. Our results and comparisons indicate that the proposed method improves segmentation results considerably, especially under severe background clutter. In particular, we show that using the iterative multiscale tensor voting approach to post-process the posterior probability maps produced by segmentation methods improves boundary detection results on 84% of the grayscale test images in the Berkeley segmentation benchmark.
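The sketch below is a minimal, hypothetical Python illustration of the iterative multiscale tensor-voting idea summarized in the abstract; it is not the authors' implementation. The simplified stick voting field, the choice of scales (`sigmas`), the curvature penalty constant `c`, the per-iteration `drop_fraction`, and the toy data are all illustrative assumptions.

```python
# Hypothetical sketch of iterative multiscale tensor voting for grouping
# oriented segments.  NOT the paper's implementation: the simplified stick
# voting field, scales, curvature constant, and removal fraction are assumptions.
import numpy as np

def stick_vote(receiver_xy, voter_xy, voter_theta, sigma):
    """Second-order stick vote cast by an oriented voter onto a receiver.

    Encodes the Gestalt principles of proximity and smooth continuation via a
    decay along the circular arc connecting voter and receiver.
    """
    d = np.asarray(receiver_xy, float) - np.asarray(voter_xy, float)
    r = np.linalg.norm(d)
    if r < 1e-9:
        t = np.array([np.cos(voter_theta), np.sin(voter_theta)])
        return np.outer(t, t)
    # Angle between the voter's tangent and the voter-receiver chord.
    phi = np.arctan2(d[1], d[0]) - voter_theta
    phi = np.arctan2(np.sin(phi), np.cos(phi))
    if abs(phi) > np.pi / 2:              # simplified: only vote "forward"
        return np.zeros((2, 2))
    s = r * phi / np.sin(phi) if abs(phi) > 1e-9 else r   # arc length
    kappa = 2.0 * np.sin(phi) / r                          # arc curvature
    c = 0.2                                                # assumed constant
    decay = np.exp(-(s ** 2 + c * kappa ** 2) / sigma ** 2)
    # The vote favors the tangent direction of the connecting arc.
    voted_theta = voter_theta + 2.0 * phi
    t = np.array([np.cos(voted_theta), np.sin(voted_theta)])
    return decay * np.outer(t, t)

def stick_saliency(segments, sigma):
    """lambda1 - lambda2 of the accumulated vote tensor at each segment."""
    sal = np.zeros(len(segments))
    for i, (xy_i, _) in enumerate(segments):
        T = np.zeros((2, 2))
        for j, (xy_j, th_j) in enumerate(segments):
            if i != j:
                T += stick_vote(xy_i, xy_j, th_j, sigma)
        lo, hi = np.linalg.eigvalsh(T)        # ascending eigenvalues
        sal[i] = hi - lo
    return sal

def iterative_multiscale_grouping(segments, sigmas=(20.0, 10.0, 5.0),
                                  drop_fraction=0.1, iterations=3):
    """Conservatively remove low-saliency segments and re-vote on the rest."""
    kept = list(segments)
    for _ in range(iterations):
        # Multiscale analysis: average the normalized saliency across scales.
        per_scale = []
        for sigma in sigmas:
            sal = stick_saliency(kept, sigma)
            per_scale.append(sal / (sal.max() + 1e-12))
        combined = np.mean(per_scale, axis=0)
        # Conservative removal: drop only a small fraction of the weakest
        # segments each pass, then re-vote on what remains.
        n_drop = max(1, int(drop_fraction * len(kept)))
        keep_idx = set(np.argsort(combined)[n_drop:])
        kept = [seg for k, seg in enumerate(kept) if k in keep_idx]
    return kept

# Toy usage: oriented segments sampled from a circle plus random clutter.
rng = np.random.default_rng(0)
circle = [((50 + 30 * np.cos(a), 50 + 30 * np.sin(a)), a + np.pi / 2)
          for a in np.linspace(0, 2 * np.pi, 40, endpoint=False)]
clutter = [((rng.uniform(0, 100), rng.uniform(0, 100)), rng.uniform(0, np.pi))
           for _ in range(40)]
foreground = iterative_multiscale_grouping(circle + clutter)
print(f"kept {len(foreground)} of {len(circle) + len(clutter)} segments")
```

In this toy setup, segments lying on the smooth curve reinforce each other's stick saliency across scales, while isolated clutter accumulates weak support and is gradually pruned, which mirrors the conservative, iterative removal strategy described above.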