Abstract

The K nearest neighbor (KNN) method of image analysis is practical, relatively easy to implement, and is becoming one of the most popular methods for conducting forest inventory using remote sensing data. When used to classify categorical variables, KNN is called the K nearest neighbor classifier; when used to predict noncategorical (continuous) variables, it is called K nearest neighbor regression. As an instance-based estimation method, KNN faces two problems: selecting the value of K and managing computation cost. We address K selection with a new approach that combines the Kolmogorov-Smirnov (KS) test and the cumulative distribution function (CDF) to determine the optimal K. Our research indicates that the KS test and CDF are much more efficient for selecting K than cross-validation and bootstrapping, which are commonly used today. We use remote sensing data reduction techniques, such as principal components analysis, layer combination, and computation of a vegetation index, to reduce computation cost. We also consider the theoretical and practical implications of different K values in forest inventory.
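To make the idea concrete, the following is a minimal sketch of KNN regression with KS/CDF-based K selection. It is an illustration of the general technique, not the authors' exact procedure: the synthetic data, the candidate range of K, and the choice to compare the empirical CDF of predictions against the CDF of held-out observations are all assumptions for demonstration.

```python
import numpy as np

def knn_regress(X_train, y_train, X_test, k):
    """Predict each test point as the mean response of its k nearest training neighbors."""
    preds = np.empty(len(X_test))
    for i, x in enumerate(X_test):
        dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance in feature space
        nearest = np.argsort(dists)[:k]               # indices of the k closest training points
        preds[i] = y_train[nearest].mean()
    return preds

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between the two empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])                      # ECDFs only jump at sample points
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

# Synthetic stand-in for a remotely sensed forest attribute (assumed data, not from the paper).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = 3.0 * X[:, 0] + rng.normal(0, 1, 200)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

# Choose the K whose predicted-value distribution most closely matches the observed CDF.
best_k = min(range(1, 21),
             key=lambda k: ks_statistic(knn_regress(X_tr, y_tr, X_te, k), y_te))
```

Because a single KS comparison per candidate K replaces repeated refitting over resampled data, this style of selection avoids the many model evaluations that cross-validation and bootstrapping require, which is consistent with the efficiency claim above.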
