Abstract

This study proposes a novel method for multichannel gray level co-occurrence matrix (GLCM) texture representation of images. The standard procedure for the automatic extraction of GLCM textures is based on a mono-spectral image, but in real applications GLCM texture extraction usually involves multi/hyperspectral images. The widely used strategy for dealing with this issue is to calculate the GLCM from the first principal component or the panchromatic band, neither of which contains all the useful information. Accordingly, in this study, we propose to represent the multichannel textures of multi/hyperspectral imagery by the use of: (1) clustering algorithms; and (2) sparse representation. In this way, the multi/hyperspectral images can be described using a series of quantized codes or dictionaries, which are more suitable for multichannel texture representation than the traditional inputs. Specifically, the K-means and fuzzy c-means methods are adopted to generate the codes of an image from the clustering point of view, while a sparse dictionary learning method based on two coding rules is proposed to produce the texture primitives. The proposed multichannel GLCM texture extraction methods were evaluated with four multi/hyperspectral datasets: GeoEye-1 and QuickBird multispectral images of the city of Wuhan, the well-known AVIRIS hyperspectral dataset from the Indian Pines test site, and the HYDICE airborne hyperspectral dataset from the Washington DC Mall. The results show that both the clustering-based and sparsity-based GLCM textures outperform the traditional textures (extracted from the first principal component) in terms of classification accuracy in all the experiments.
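The clustering-based branch of this pipeline can be made concrete with a short sketch. The code below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes scikit-learn and scikit-image are available, quantizes the full multi/hyperspectral cube into K-means cluster labels (the "codes"), and then computes a GLCM and Haralick-style statistics directly on the label image. The function name and parameter values are hypothetical.

```python
# Minimal sketch: clustering-based quantization followed by a GLCM on the code image.
# Assumes scikit-learn and scikit-image (graycomatrix is spelled greycomatrix in
# scikit-image < 0.19); parameter values are illustrative, not those of the paper.
import numpy as np
from sklearn.cluster import KMeans
from skimage.feature import graycomatrix, graycoprops

def multichannel_glcm_features(cube, n_codes=32):
    """cube: (rows, cols, bands) multi/hyperspectral image array."""
    rows, cols, bands = cube.shape
    # 1. Quantize all bands jointly: each pixel spectrum is mapped to one code.
    pixels = cube.reshape(-1, bands).astype(np.float64)
    codes = KMeans(n_clusters=n_codes, n_init=10, random_state=0).fit_predict(pixels)
    code_image = codes.reshape(rows, cols).astype(np.uint8)
    # 2. Build the co-occurrence matrix on the code image rather than one gray band.
    glcm = graycomatrix(code_image, distances=[1], angles=[0, np.pi / 2],
                        levels=n_codes, symmetric=True, normed=True)
    # 3. Summarize it with classic Haralick-style texture statistics.
    return {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```

For brevity, this sketch computes one global GLCM; per-pixel texture features, as used for classification in the paper, would instead apply the same computation in a sliding window around each pixel.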

Highlights

  • Texture analysis, which is based on the local spatial changes of intensity or color brightness, plays an important role in many applications of remote sensing imagery [1,2]

  • In order to validate the effectiveness of the proposed multichannel gray level co-occurrence matrix (GLCM) algorithms for texture feature representation and the classification of multi/hyperspectral imagery, experiments were conducted on four test images: GeoEye-1 and QuickBird multispectral images of the city of Wuhan, the well-known AVIRIS hyperspectral dataset from the Indian Pines test site, and the HYDICE airborne hyperspectral dataset from the Washington DC Mall

  • The traditional GLCM texture is calculated from a mono-spectral image, e.g., one of the multispectral bands, the first principal component, or the panchromatic image; this baseline is sketched below
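For contrast with the proposed methods, the traditional baseline (GLCM extracted from the first principal component) can be sketched as follows. This is again an illustration under the same library assumptions, with hypothetical names, rather than the authors' code.

```python
# Sketch of the traditional baseline: quantize the first principal component
# of the image cube and compute a single-band GLCM from it.
import numpy as np
from sklearn.decomposition import PCA
from skimage.feature import graycomatrix

def first_pc_glcm(cube, levels=32):
    rows, cols, bands = cube.shape
    # Project each pixel spectrum onto the first principal component.
    pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, bands)).reshape(rows, cols)
    # Rescale PC1 to integer gray levels so a co-occurrence matrix can be built.
    span = pc1.max() - pc1.min()
    pc1 = np.round((pc1 - pc1.min()) / (span + 1e-12) * (levels - 1)).astype(np.uint8)
    return graycomatrix(pc1, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
```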


Summary

Introduction

Texture analysis, which is based on the local spatial changes of intensity or color brightness, plays an important role in many applications of remote sensing imagery (e.g., classification) [1,2]. The gray level co-occurrence matrix (GLCM) is a classic spatial and textural feature extraction method [5], which is widely used for texture analysis and pattern recognition with remote sensing data [3,6]. For example, Pacifici et al. [12] used multi-scale GLCM textural features extracted from very high resolution panchromatic imagery to improve urban land-use classification accuracy. However, important information may be discarded or missing when textures are extracted from the first principal component only. In this context, we propose a multichannel GLCM texture extraction procedure for multi/hyperspectral images. Related work includes Palm [14], who proposed the color co-occurrence matrix, an extension of the GLCM for texture feature extraction from color images. Among the GLCM statistics, the inverse difference describes the local homogeneity, which is high when a limited range of gray levels is distributed over the local image region.
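As a concrete example of the last statistic mentioned above: for a normalized co-occurrence matrix $P(i,j)$ over $G$ gray levels (or quantized codes), the inverse difference is commonly written as below; conventions vary, and the closely related inverse difference moment divides by $1+(i-j)^2$ instead.

```latex
\mathrm{ID} \;=\; \sum_{i=0}^{G-1} \sum_{j=0}^{G-1} \frac{P(i,j)}{1 + |i - j|}
```

The denominator penalizes co-occurrences of dissimilar gray levels, so the statistic is high exactly when a narrow range of levels dominates the local window.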

Method
Clustering-Based Quantization
Sparsity-Based Image Representation
Multichannel GLCM Texture Calculation
Datasets and Parameters
GeoEye-1 Wuhan Data
QuickBird Wuhan Data
AVIRIS Indian Pines Dataset
HYDICE DC Mall Dataset
Conclusions
