Abstract

An important component of many supervised classifiers is the estimation of one or more covariance matrices, and the typically small training-sample count in supervised hyperspectral image classification makes strong regularization necessary when estimating such matrices. This regularization is often accomplished by adding a scaled regularization matrix, e.g., the identity matrix, to the sample covariance matrix. We introduce a framework for specifying and interpreting a broad range of such regularization matrices in the linear and quadratic discriminant analysis (LDA and QDA, respectively) classifier settings. A key component of the proposed framework is the relationship between regularization and linear dimensionality reduction. We show that the equivalent of the LDA or QDA classifier in any linearly reduced subspace can be reached by using an appropriate regularization matrix. Furthermore, several such regularization matrices can be added together, forming more complex regularizers. We use this framework to build regularization matrices that incorporate multiscale spectral representations. Several realizations of such regularization matrices are discussed, and their performance in QDA classifiers is tested on four hyperspectral data sets. The classifiers often benefit from the proposed regularization matrices.
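To make the kind of regularizer the abstract refers to concrete, below is a minimal sketch of the common identity-shrinkage covariance estimator together with a QDA discriminant built on it. The function names, the shrinkage weight `alpha`, and the trace-based scaling are illustrative assumptions for a generic shrinkage scheme, not the specific multiscale regularization matrices proposed in the paper.

```python
import numpy as np

def regularized_covariance(X, alpha=0.1):
    """Shrink the sample covariance toward a scaled identity matrix.

    Sigma_hat = (1 - alpha) * S + alpha * tau * I, where S is the
    sample covariance and tau = trace(S) / p matches the identity's
    scale to the average variance of the data.
    """
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    tau = np.trace(S) / p
    return (1 - alpha) * S + alpha * tau * np.eye(p)

def qda_score(x, mean, sigma):
    """Quadratic discriminant score for one class, up to an additive
    constant shared by all classes (equal priors assumed)."""
    diff = x - mean
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (logdet + diff @ np.linalg.solve(sigma, diff))

# Toy usage: two classes with fewer samples than spectral bands, where
# the unregularized sample covariance would be singular and QDA would
# fail without the identity shrinkage above.
rng = np.random.default_rng(0)
p, n = 50, 30                      # 50 bands, 30 samples per class
X0 = rng.normal(0.0, 1.0, (n, p))
X1 = rng.normal(0.5, 1.0, (n, p))
classes = [(X0.mean(axis=0), regularized_covariance(X0, alpha=0.2)),
           (X1.mean(axis=0), regularized_covariance(X1, alpha=0.2))]
x = rng.normal(0.5, 1.0, p)        # a new pixel drawn from class 1
pred = int(np.argmax([qda_score(x, m, s) for m, s in classes]))
print("predicted class:", pred)
```

In this sketch the regularization matrix is simply the identity; the paper's framework generalizes the same additive form to matrices that encode linear dimensionality reduction and multiscale spectral structure.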
