Abstract
K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) are two supervised machine learning algorithms used extensively to solve classification and regression problems in application domains such as medical diagnosis, pattern recognition, and image classification. KNN is attractive for its simplicity, whereas SVM is generally regarded as the more complex of the two: the computational complexity of KNN is lower than that of SVM, but the accuracy of SVM is, in general, higher than that of KNN. This paper analyzes the complexity of KNN and SVM and then examines alternative strategies for reducing that complexity. The accuracies of both algorithms are compared through a survey of experimental results reported in the literature for different training datasets. Reducing the number of training instances and using kernel functions are identified as the main ways to lower SVM complexity. The factors affecting KNN accuracy are analyzed and possible alternatives assessed; kernel functions and distance-weighted KNN are concluded to be the means by which KNN accuracy can be improved to compete with that of SVM. Finally, to exploit the best potential of the two algorithms, a hybrid algorithm combining KNN and SVM is also proposed.
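The sketch below illustrates, in broad strokes, one way the proposed KNN+SVM hybrid could be realized; it is not the authors' implementation. For each query point it retrieves the k nearest training instances and, only when those neighbors disagree, trains a small local RBF-kernel SVM on that neighborhood. The dataset, the neighborhood size k, and the SVM hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a local KNN+SVM hybrid (assumed design, not the paper's code).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Stand-in dataset; any labeled tabular dataset would do.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

k = 25  # neighborhood size (assumed value)
nn = NearestNeighbors(n_neighbors=k).fit(X_train)

def hybrid_predict(x):
    """Predict one sample: plain KNN vote when the neighborhood is pure,
    otherwise an RBF-kernel SVM trained only on the k nearest neighbors."""
    _, idx = nn.kneighbors(x.reshape(1, -1))
    neigh_X, neigh_y = X_train[idx[0]], y_train[idx[0]]
    if len(np.unique(neigh_y)) == 1:        # all neighbors agree: no SVM needed
        return neigh_y[0]
    local_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(neigh_X, neigh_y)
    return local_svm.predict(x.reshape(1, -1))[0]

y_pred = np.array([hybrid_predict(x) for x in X_test])
print("hybrid KNN+SVM accuracy:", accuracy_score(y_test, y_pred))
```

Because each SVM is fitted on only k instances rather than the full training set, this style of hybrid keeps the per-query training cost small while letting the SVM resolve the ambiguous neighborhoods where plain KNN tends to err.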