Abstract

Nearest neighbor (NN) techniques are commonly used in remote sensing, pattern recognition, and statistics to classify objects into a predefined number of categories based on a given set of predictors. These techniques are particularly useful when the relationship between variables is highly nonlinear. In most studies, the distance measure is adopted a priori. In contrast, we propose a general procedure to find Euclidean metrics in a low-dimensional space (i.e., one in which the number of dimensions is less than the number of predictor variables) whose defining property is that they minimize the variance of the class label over all pairs of points whose distance is less than a predefined value. k-NN is then applied in each embedded space to estimate the probability that a query point belongs to a given class, and the final class estimate is obtained from an ensemble of these predictions. To illustrate the application of this technique, a typical land cover classification using a Landsat-5 Thematic Mapper scene is presented. Experimental results indicate substantial improvement in classification accuracy compared with approaches such as maximum likelihood, linear discriminant analysis, standard k-NN, and adaptive quasi-conformal kernel k-NN.
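
The sketch below is only a rough illustration of the ensemble-of-embeddings idea described in the abstract: several k-NN classifiers are fit in different low-dimensional projections of the predictors, and their class-probability estimates are averaged. The paper's metric-learning step (minimizing class-label variance among nearby pairs) is not reproduced here; a random linear projection stands in for each learned metric, and the function names and toy data are hypothetical.

```python
# Ensemble of k-NN classifiers in low-dimensional embedded spaces (sketch).
# NOTE: the random projection below is a stand-in assumption, not the
# variance-minimizing Euclidean metric learned in the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_embedded_knn(X, y, n_dims, k, rng):
    """Project the predictors to n_dims dimensions and fit k-NN there."""
    W = rng.standard_normal((X.shape[1], n_dims))  # stand-in for a learned metric
    knn = KNeighborsClassifier(n_neighbors=k).fit(X @ W, y)
    return W, knn

def ensemble_predict(models, X):
    """Average per-class posterior estimates across the embedded spaces."""
    probs = np.mean([knn.predict_proba(X @ W) for W, knn in models], axis=0)
    return probs.argmax(axis=1), probs

# Toy usage: synthetic spectral-band predictors and land-cover labels.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 6))   # e.g., 6 Thematic Mapper bands
y_train = rng.integers(0, 3, size=200)    # 3 hypothetical cover classes
models = [fit_embedded_knn(X_train, y_train, n_dims=2, k=5, rng=rng)
          for _ in range(10)]
labels, posteriors = ensemble_predict(models, rng.standard_normal((5, 6)))
```

Averaging the per-space posteriors, rather than taking a majority vote, matches the abstract's description of combining class-membership estimates from each embedded space into a single ensemble prediction.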
