A goal of image segmentation is to divide an image into regions that have some semantic meaning. Because regions of semantic meaning often include variations in colour and intensity, various segmentation algorithms that use multi-pixel textures have been developed. A challenge for these algorithms is to incorporate invariance to rotation and changes in scale. In this paper, we propose a new scale- and rotation-invariant, texture-based segmentation algorithm that performs feature extraction using the Dual-Tree Complex Wavelet Transform (DT-CWT). The DT-CWT is used to analyse a signal at, and between, dyadic scales. The segmentation performance of the new method is compared with that of existing techniques over several imagery databases with operator-produced ground-truth data. Compared with previous algorithms, our segmentation results show that the new texture feature performs well over general images and particularly well over images containing objects with scaled and rotated textures.
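As a rough illustration only, and not the authors' method, the sketch below shows one common way to build per-pixel texture features from DT-CWT subband magnitudes across dyadic scales. It assumes the open-source `dtcwt` Python package; the level count, the nearest-neighbour upsampling, and the grayscale input are illustrative assumptions, and the subsequent segmentation stage (e.g. clustering of the feature vectors) is omitted.

```python
import numpy as np
import dtcwt  # assumed available: the open-source Python DT-CWT package


def dtcwt_texture_features(image, nlevels=4):
    """Illustrative sketch: per-pixel texture features from DT-CWT
    subband magnitudes (not the paper's exact feature extraction)."""
    transform = dtcwt.Transform2d()
    pyramid = transform.forward(image.astype(float), nlevels=nlevels)

    h, w = image.shape
    features = []
    for level, highpass in enumerate(pyramid.highpasses):
        # Each level holds six complex, directionally selective subbands.
        magnitudes = np.abs(highpass)
        factor = 2 ** (level + 1)
        for band in range(magnitudes.shape[2]):
            # Upsample each subband magnitude back to image resolution
            # (nearest-neighbour repeat) so every pixel gets a feature vector.
            up = np.repeat(np.repeat(magnitudes[:, :, band], factor, axis=0),
                           factor, axis=1)[:h, :w]
            features.append(up)
    # (h, w, nlevels * 6) feature cube, one vector per pixel, to be fed to a
    # segmentation stage outside the scope of this sketch.
    return np.dstack(features)
```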