Abstract

K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) are two supervised machine learning algorithms that have been used extensively to solve classification and regression problems across application domains, with medical diagnosis, pattern recognition, and image classification among the most popular. KNN is attractive for its simplicity, whereas SVM is generally considered more complex. The computational complexity of KNN is lower than that of SVM; however, the accuracy of SVM is in general higher than that of KNN. This paper analyzes the complexity of KNN and SVM and then examines alternative strategies for reducing that complexity. The accuracies of both SVM and KNN are analyzed through a literature survey of experimental results on different training datasets. Reducing the number of training instances and using kernel functions are found to be effective ways to lower SVM complexity. Factors affecting KNN accuracy are analyzed and possible alternatives are assessed; kernel functions and distance-weighted KNN are identified as the means of improving KNN accuracy to compete with that of SVM. To utilize the best potential of the two algorithms, a hybrid algorithm combining KNN and SVM is also proposed.
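As a minimal sketch of the distance-weighted KNN idea mentioned above: instead of giving each of the k nearest neighbors an equal vote, each vote is weighted by the inverse of the neighbor's distance to the query point, so closer neighbors count for more. The function name and the inverse-distance weighting scheme below are illustrative assumptions, not the specific formulation used in the paper.

```python
import numpy as np

def distance_weighted_knn(X_train, y_train, x_query, k=3):
    """Classify x_query by weighting each of the k nearest
    neighbors' votes by the inverse of its distance."""
    # Euclidean distance from the query to every training point
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k nearest neighbors
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights; a small epsilon avoids division by zero
    weights = 1.0 / (dists[nearest] + 1e-9)
    # Accumulate weighted votes per class label
    votes = {}
    for label, w in zip(y_train[nearest], weights):
        votes[label] = votes.get(label, 0.0) + w
    # Return the label with the largest total weighted vote
    return max(votes, key=votes.get)

# Example: two well-separated 1-D clusters (labels 0 and 1)
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
y = np.array([0, 0, 0, 1, 1])
print(distance_weighted_knn(X, y, np.array([0.15]), k=3))  # -> 0
```

Unweighted KNN can be swayed by distant neighbors when k is large relative to a small class; inverse-distance weighting mitigates this, which is one reason it can narrow the accuracy gap with SVM.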
