Abstract

Past work on unsupervised segmentation of texture images has relied on several restrictive assumptions to reduce the difficulty of this challenging task. Typically, a fixed number of texture regions is assumed, and each region is assumed to be generated by a simple model; differences in first-order statistics are also used to facilitate discrimination between textures. This paper introduces an approach to unsupervised segmentation that offers promise for handling unrestricted natural scenes containing textural regions. A simple but effective feature set and a novel measure of dissimilarity are used to accurately generate boundaries between an unknown number of regions, without using first-order statistics or texture models. A two-stage approach is used to partition a texture image. In the first stage, a set of sliding windows scans the image to generate a sequence of feature vectors. The windowed regions exhibiting the highest inhomogeneity in their textural characteristics determine a crude first-stage boundary, separating textured areas that are unambiguously homogeneous from one another. These homogeneous regions are used to estimate a set of prototype feature vectors. In the second stage, supervised segmentation is performed to obtain an accurate boundary between the textured regions by means of a constrained hierarchical clustering technique. Each inhomogeneous window obtained in the first stage is split into four identical subwindows, for which feature vectors are estimated. Each subwindow is assigned to a homogeneous region to which it is connected, chosen as the region whose prototype vector is closest in feature space. Any two adjacent subwindows assigned to different regions are in turn treated as inhomogeneous windows, and each is again split into four subwindows. This classification scheme is repeated hierarchically until the desired boundary resolution is achieved. The technique has been tested on several multi-texture images, yielding accurate segmentation results comparable or superior to those obtained by human visual segmentation.
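
To make the two-stage procedure concrete, the following Python sketch illustrates one possible reading of it. The abstract does not specify the feature set, the dissimilarity measure, or any thresholds, so window_features (mean, standard deviation, and gradient statistics), the Euclidean dissimilarity, inhom_thresh, and the window sizes below are illustrative assumptions; stage two is also simplified to check label disagreement only among the four subwindows of each split window rather than across all adjacent subwindows.

    import numpy as np


    def window_features(patch):
        """Toy texture features for a window: mean, standard deviation, and mean
        absolute horizontal/vertical grey-level differences (a placeholder for
        the paper's unspecified feature set)."""
        dx = np.abs(np.diff(patch, axis=1)).mean() if patch.shape[1] > 1 else 0.0
        dy = np.abs(np.diff(patch, axis=0)).mean() if patch.shape[0] > 1 else 0.0
        return np.array([patch.mean(), patch.std(), dx, dy])


    def dissimilarity(f1, f2):
        """Placeholder dissimilarity: Euclidean distance between feature vectors."""
        return np.linalg.norm(f1 - f2)


    def segment(image, win=32, min_win=4, inhom_thresh=10.0):
        """Two-stage segmentation sketch (assumes a greyscale image whose sides
        are multiples of win, with win a power of two)."""
        h, w = image.shape
        ny, nx = h // win, w // win
        feats = np.array([[window_features(image[i*win:(i+1)*win, j*win:(j+1)*win])
                           for j in range(nx)] for i in range(ny)])

        # Stage 1: mark a window inhomogeneous if its features differ strongly
        # from a 4-connected neighbouring window.
        inhom = np.zeros((ny, nx), dtype=bool)
        for i in range(ny):
            for j in range(nx):
                for di, dj in ((1, 0), (0, 1)):
                    if i + di < ny and j + dj < nx:
                        if dissimilarity(feats[i, j], feats[i + di, j + dj]) > inhom_thresh:
                            inhom[i, j] = inhom[i + di, j + dj] = True

        # Group the remaining (homogeneous) windows into connected regions and
        # average their feature vectors into one prototype per region.
        labels = -np.ones((ny, nx), dtype=int)
        prototypes = []
        for i in range(ny):
            for j in range(nx):
                if inhom[i, j] or labels[i, j] != -1:
                    continue
                labels[i, j] = len(prototypes)
                stack, members = [(i, j)], []
                while stack:
                    y, x = stack.pop()
                    members.append(feats[y, x])
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < ny and 0 <= xx < nx and not inhom[yy, xx] and labels[yy, xx] == -1:
                            labels[yy, xx] = len(prototypes)
                            stack.append((yy, xx))
                prototypes.append(np.mean(members, axis=0))

        # Initialise the segmentation map from the stage-1 labels.
        seg = np.zeros((h, w), dtype=int)
        for i in range(ny):
            for j in range(nx):
                seg[i*win:(i+1)*win, j*win:(j+1)*win] = labels[i, j]

        # Stage 2: split each inhomogeneous window into four subwindows, assign
        # each subwindow to the nearest prototype, and keep splitting wherever
        # adjacent subwindows disagree (only disagreement inside the quad is
        # checked here, a simplification of the paper's criterion).
        adjacency = {0: (1, 2), 1: (0, 3), 2: (0, 3), 3: (1, 2)}

        def refine(y0, x0, size):
            half = size // 2
            quads = ((y0, x0), (y0, x0 + half), (y0 + half, x0), (y0 + half, x0 + half))
            labs = []
            for yy, xx in quads:
                f = window_features(image[yy:yy + half, xx:xx + half])
                lab = int(np.argmin([dissimilarity(f, p) for p in prototypes]))
                seg[yy:yy + half, xx:xx + half] = lab
                labs.append(lab)
            if half > min_win:
                for k, (yy, xx) in enumerate(quads):
                    if any(labs[k] != labs[m] for m in adjacency[k]):
                        refine(yy, xx, half)

        if prototypes:
            for i in range(ny):
                for j in range(nx):
                    if inhom[i, j]:
                        refine(i * win, j * win, win)
        return seg

Calling segment(img, win=32) on a greyscale array whose sides are multiples of 32 returns an integer label map at subwindow resolution; in practice the threshold and the features would need tuning, which is precisely the role played by the paper's actual feature set and dissimilarity measure.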
