Abstract

One of the most important issues in Text Categorization (TC) is Feature Selection (FS). Many FS methods have been proposed and widely used in the TC field, such as Information Gain (IG), Document Frequency (DF) thresholding, and Mutual Information (MI). Empirical studies show that IG is one of the most effective methods, that DF performs similarly, and that, in contrast, MI has relatively poor performance. A basic research question is why these FS methods yield different performance. Much existing work answers this question on the basis of empirical studies. This paper presents a formal study of FS based on category resolve power. First, two desirable constraints that any reasonable FS function should satisfy are defined; then a universal method for developing FS functions is presented, and a new FS function, KG (Knowledge Gain), is developed using this method. Analysis shows that both IG and KG satisfy this universal method. Experiments on the Reuters-21578, NewsGroup, and OHSUMED collections show that KG and IG achieve the best performance, and KG even outperforms IG on two of the collections. These results imply that the universal method is highly effective and provides a formal evaluation criterion for FS methods.
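For context on the criteria the abstract compares, the sketch below shows the standard Information Gain score for a term, IG(t) = H(C) − P(t)·H(C|t) − P(¬t)·H(C|¬t). It is not the paper's KG function; the toy corpus, labels, and function name are illustrative assumptions.

```python
import math
from collections import Counter

def information_gain(docs, labels, term):
    """IG(t) = H(C) - P(t)*H(C|t) - P(not t)*H(C|not t),
    where C is the category variable and t is term presence."""
    n = len(docs)

    def entropy(lbls):
        # Shannon entropy (base 2) of a category label list
        total = len(lbls)
        if total == 0:
            return 0.0
        counts = Counter(lbls)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # split the labels by whether the term occurs in the document
    with_t = [l for d, l in zip(docs, labels) if term in d]
    without_t = [l for d, l in zip(docs, labels) if term not in d]
    p_t = len(with_t) / n
    return entropy(labels) - p_t * entropy(with_t) - (1 - p_t) * entropy(without_t)

# toy corpus (hypothetical): documents as token sets, one category label each
docs = [{"ball", "goal"}, {"ball", "team"}, {"stock", "price"}, {"stock", "market"}]
labels = ["sport", "sport", "finance", "finance"]

print(information_gain(docs, labels, "ball"))  # → 1.0 (perfectly separates the classes)
print(information_gain(docs, labels, "goal"))  # ≈ 0.31 (less informative)
```

A term that splits the corpus cleanly along category lines receives the maximum score (here H(C) = 1 bit), which is the intuition behind ranking features by IG.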
