Abstract

When multimedia imagery and multispectral remote sensing data are combined for information analysis, preprocessing becomes a necessary step. The KNN algorithm is one of the classic algorithms of data mining and one of the most important branches of data analysis; it is widely used for classification, regression, missing-value imputation, and other machine learning tasks. As a lazy algorithm, it requires no prior statistical knowledge and no additional training data to learn description rules, and it is easy to implement. However, the algorithm has well-known drawbacks: choosing an appropriate value of K, unsatisfactory results on data with certain special distributions, and prohibitive computational cost on high-dimensional data. To address these shortcomings, this paper proposes the KNNLC algorithm, which improves the nearest-neighbor selection strategy of the traditional KNN algorithm by combining the theory of sparse coding with locality-constrained linear coding. Taking classification experiments as an example, comparisons on several data sets show that the KNNLC algorithm outperforms the classical KNN classifier on average, achieving accuracy 2 to 5 percentage points higher in most cases.
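The abstract does not spell out the neighbor-selection procedure, so the following is only a minimal sketch of the general idea: re-rank a coarse set of Euclidean nearest-neighbor candidates by their locality-constrained linear coding (LLC) reconstruction weights before the majority vote. The function names (llc_weights, knnlc_predict) and parameters (k_candidates, k_final, lam, beta) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


def llc_weights(x, neighbors, lam=1e-4, beta=1e-4):
    """Locality-constrained linear coding weights of a query over its neighbors.

    Analytical LLC-style solution: minimize the reconstruction error of x from
    the rows of `neighbors`, subject to the weights summing to one.
    """
    z = neighbors - x                                          # center neighbors on the query
    C = z @ z.T                                                # local covariance (k x k)
    C += np.eye(len(neighbors)) * (beta * np.trace(C) + lam)   # regularize for stability
    w = np.linalg.solve(C, np.ones(len(neighbors)))
    return w / w.sum()                                         # enforce the sum-to-one constraint


def knnlc_predict(x, X_train, y_train, k_candidates=20, k_final=5):
    """Classify x by re-ranking Euclidean candidates with LLC weights, then voting."""
    dists = np.linalg.norm(X_train - x, axis=1)
    cand = np.argsort(dists)[:k_candidates]                    # coarse Euclidean candidate set
    w = llc_weights(x, X_train[cand])                          # reconstruction weights over candidates
    chosen = cand[np.argsort(-np.abs(w))[:k_final]]            # keep the highest-weight neighbors
    labels, counts = np.unique(y_train[chosen], return_counts=True)
    return labels[np.argmax(counts)]                           # majority vote among selected neighbors


# Minimal usage on synthetic two-class data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(knnlc_predict(np.array([2.5, 2.5]), X, y))               # prints the predicted class label
```

The design intuition assumed here is that reconstruction weights favor neighbors that jointly explain the query, which can filter out isolated or noisy points that plain Euclidean ranking would keep.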
