Abstract

Text categorization is an important task in text mining, concerned with the automated classification of large numbers of documents. Many supervised learning methods have been applied to text classification. Among them, the K-Nearest Neighbor (KNN) algorithm is widely used and is one of the best text classifiers owing to its simplicity and efficiency. For text categorization, a document is often represented as a vector over a set of selected words called feature items; this representation is known as the vector space model, and KNN is one of the algorithms built on it. However, the traditional KNN algorithm assumes that the weight of each feature item is identical across categories. This assumption is not reasonable, because a feature item may have different importance and a different distribution in different categories. To address this disadvantage of the traditional KNN algorithm, we propose a refined weighted KNN algorithm based on the idea of variance. Experimental results show that the refined weighted KNN significantly improves on the performance of the traditional KNN classifier.
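
To make the idea concrete, the sketch below shows one plausible reading of a variance-weighted KNN text classifier in Python: each feature item is weighted by the variance of its mean value across category centroids, so features that are distributed very differently between categories contribute more to the similarity. The weighting formula, function names, and toy data are illustrative assumptions; the abstract does not give the paper's exact scheme.

```python
import numpy as np
from collections import Counter

def category_variance_weights(X, y):
    """Weight each feature by the variance of its mean value across categories.

    Hypothetical reading of the 'variance' idea in the abstract: features whose
    average weight differs a lot between categories are treated as more
    discriminative and therefore up-weighted.
    """
    categories = np.unique(y)
    centroids = np.vstack([X[y == c].mean(axis=0) for c in categories])
    return centroids.var(axis=0)            # one weight per feature item

def weighted_knn_predict(X_train, y_train, x, w, k=5):
    """Classify document vector x by weighted cosine similarity to X_train."""
    Xw, xw = X_train * w, x * w              # apply per-feature weights
    sims = Xw @ xw / (np.linalg.norm(Xw, axis=1) * np.linalg.norm(xw) + 1e-12)
    top_k = np.argsort(sims)[-k:]            # indices of the k nearest neighbours
    return Counter(y_train[top_k]).most_common(1)[0][0]

# Toy usage: 4 documents, 3 feature items, 2 categories
X = np.array([[3., 0., 1.], [2., 0., 2.], [0., 4., 1.], [0., 3., 2.]])
y = np.array([0, 0, 1, 1])
w = category_variance_weights(X, y)
print(weighted_knn_predict(X, y, np.array([1., 0., 1.]), w, k=3))   # -> 0
```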
