Abstract

Image segmentation can be viewed as an unsupervised clustering of the pixels in the image being processed. Most existing segmentation algorithms take a single pixel as the processing unit and segment an image mainly on the basis of pixel gray values. However, the spatial structure among pixels carries equally important information about image content. To exploit both the gray-value and the spatial information of pixels, this paper presents an adaptive image segmentation approach based on the Vector Quantization (VQ) technique. In this method, the image to be segmented is divided into small sub-blocks, each sub-block constituting a vector, and the vectors are clustered by VQ to produce the segmentation. A self-organizing map (SOM) neural network is adopted to realize the VQ algorithm adaptively. To resolve the problem of determining the codebook size (i.e., the number of segments) for the SOM-based VQ segmentation approach, an adaptive search algorithm for estimating the optimal codebook size is developed, which minimizes the ratio of within-class scatter to between-class scatter during segmentation. Experiments were conducted on real brain MRI images from the Internet Brain Segmentation Repository (IBSR) and other databases, together with comparison studies against state-of-the-art algorithms. The results are evaluated both subjectively, by comparison with human vision, and quantitatively, in terms of the average overlap metric, and show that the proposed method outperforms the existing algorithms.
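The pipeline the abstract describes can be sketched in a few lines: split the image into non-overlapping sub-block vectors, cluster them with a SOM acting as an adaptive vector quantizer, and label each block by its nearest codeword. The sketch below is a minimal, hypothetical illustration on a synthetic two-region image; all names and parameters (`block_size`, `n_codes`, `epochs`, `lr`, the initialization scheme) are illustrative assumptions, not the paper's actual settings, and the codebook-size search is omitted.

```python
import numpy as np

def image_to_blocks(img, block_size=2):
    """Split a 2-D gray image into non-overlapping block vectors."""
    h = img.shape[0] - img.shape[0] % block_size
    w = img.shape[1] - img.shape[1] % block_size
    return (img[:h, :w]
            .reshape(h // block_size, block_size, w // block_size, block_size)
            .swapaxes(1, 2)
            .reshape(-1, block_size * block_size)
            .astype(float))

def train_som(vectors, n_codes=2, epochs=20, lr=0.5, seed=0):
    """1-D SOM as an adaptive VQ: the winning codeword and its neighbors
    move toward each input, with a shrinking rate and neighborhood."""
    rng = np.random.default_rng(seed)
    # Spread the initial codewords between the extreme block vectors
    # (an illustrative choice; the paper's initialization is not given).
    codebook = np.linspace(vectors.min(0), vectors.max(0), n_codes)
    idx = np.arange(n_codes)
    for t in range(epochs):
        sigma = max(n_codes / 2.0 * (1 - t / epochs), 0.5)
        eta = lr * (1 - t / epochs) + 1e-3
        for v in vectors[rng.permutation(len(vectors))]:
            winner = np.argmin(np.linalg.norm(codebook - v, axis=1))
            h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))
            codebook += eta * h[:, None] * (v - codebook)
    return codebook

def segment(img, n_codes=2, block_size=2):
    """Label each sub-block with the index of its nearest codeword."""
    blocks = image_to_blocks(img, block_size)
    codebook = train_som(blocks, n_codes)
    labels = np.argmin(
        np.linalg.norm(blocks[:, None, :] - codebook[None], axis=2), axis=1)
    return labels.reshape(img.shape[0] // block_size,
                          img.shape[1] // block_size)

# Synthetic test image: dark left half, bright right half, mild noise.
rng = np.random.default_rng(1)
img = np.hstack([np.full((8, 8), 10.0), np.full((8, 8), 200.0)])
img += rng.normal(0.0, 2.0, img.shape)
seg = segment(img, n_codes=2)  # one label per 2x2 block
```

Because each vector is a whole sub-block of gray values, the quantizer sees local spatial structure (edges, texture) rather than isolated pixel intensities, which is the motivation stated in the abstract.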
