Abstract
The K nearest neighbor (KNN) method of image analysis is practical, relatively easy to implement, and becoming one of the most popular methods for conducting forest inventory using remote sensing data. When used to classify categorical variables, KNN is referred to as the K nearest neighbor classifier; when applied to predict continuous variables, it is called K nearest neighbor regression. As an instance-based estimation method, KNN faces two main difficulties: selecting the value of K and managing computational cost. We address K selection with a new approach that combines the Kolmogorov-Smirnov (KS) test and the cumulative distribution function (CDF) to determine the optimal K. Our research indicates that the KS test and CDF are much more efficient for selecting K than cross-validation and bootstrapping, which are commonly used today. To reduce computational cost, we apply remote sensing data reduction techniques such as principal components analysis, layer combination, and computation of a vegetation index. We also consider the theoretical and practical implications of different K values in forest inventory.
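The sketch below illustrates one plausible reading of the K-selection idea described above: reduce the spectral bands with principal components analysis, fit a KNN regressor on field-plot reference data, predict the target variable (e.g., stand volume) for the remaining pixels, and choose the K whose predicted distribution best matches the empirical CDF of the observed reference values under the two-sample KS statistic. The synthetic data, variable names, and candidate K range are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of KS/CDF-based K selection for KNN regression.
# Assumptions: synthetic data, K in 1..30, PCA to 3 components for data reduction.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Stand-ins for spectral bands (features) and field-measured volume (response).
n_plots, n_pixels, n_bands = 200, 5000, 6
X_plots = rng.normal(size=(n_plots, n_bands))            # reference field plots
y_plots = X_plots @ rng.normal(size=n_bands) + rng.normal(scale=0.5, size=n_plots)
X_pixels = rng.normal(size=(n_pixels, n_bands))          # target pixels without field data

# Data reduction step (analogous to the PCA mentioned in the abstract).
pca = PCA(n_components=3).fit(X_plots)
Z_plots, Z_pixels = pca.transform(X_plots), pca.transform(X_pixels)

best_k, best_stat = None, np.inf
for k in range(1, 31):                                    # candidate K values (assumed range)
    knn = KNeighborsRegressor(n_neighbors=k).fit(Z_plots, y_plots)
    y_hat = knn.predict(Z_pixels)
    # KS statistic: maximum distance between the empirical CDF of the
    # predicted pixel values and the CDF of the observed plot values.
    stat, _ = ks_2samp(y_hat, y_plots)
    if stat < best_stat:
        best_k, best_stat = k, stat

print(f"selected K = {best_k} (KS statistic = {best_stat:.3f})")
```

Unlike cross-validation, this distribution-matching criterion requires only one fit and one prediction pass per candidate K, which is consistent with the efficiency claim in the abstract.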