Text data, which is generated at an exponential rate, can now be analyzed by transforming it into a large sparse matrix using a term-weighting scheme. A complete weighting scheme consists of three components: term frequency, document frequency, and vector normalization. Multiplying these three components yields numerical values that indicate how important a word is to a document. In their raw form, however, these values are unsuitable for the semantic analysis of text. Several techniques exist for this purpose, among them topic analysis, which aims to identify the subjects discussed in large text collections. Non-Negative Matrix Factorization (NMF) is widely used for topic analysis: it decomposes an input matrix into the product of two or more matrices and can be initialized with either random or deterministic values. In this study, experiments were conducted on a dataset of 20,000 articles from the online encyclopedia Wikipedia to investigate how the text-weighting schemes and initialization strategies commonly used in the literature affect the NMF method. The number of clusters used in the experiments was determined by an analytical procedure subject to an upper limit. The results indicate that the "lnc" and "nnc" weighting schemes yielded the best performance with NMF, suggesting that employing "lnc" or "nnc" weighting will lead to more favorable outcomes in topic analysis.
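To make the two ingredients named above concrete, the sketch below implements the "lnc" scheme (logarithmic term frequency, no document-frequency factor, cosine normalization, in SMART notation) and a basic NMF with random initialization via multiplicative updates, in plain NumPy. The toy matrix, cluster count, and update rule are illustrative assumptions; the study's dataset, analytical cluster-selection procedure, and exact NMF configuration are not reproduced here.

```python
import numpy as np

def lnc_weight(tf):
    """SMART 'lnc' weighting for a (docs x terms) raw count matrix:
    logarithmic TF, no IDF component, cosine (L2) row normalization."""
    tf = np.asarray(tf, dtype=float)
    w = np.zeros_like(tf)
    mask = tf > 0
    w[mask] = 1.0 + np.log(tf[mask])          # logarithmic term frequency
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                    # guard against empty documents
    return w / norms                           # cosine normalization

def nmf(V, k, iters=200, seed=0):
    """Minimal NMF via Lee-Seung multiplicative updates, random init.
    Factorizes V (docs x terms) into W (docs x k) @ H (k x terms),
    both non-negative; rows of H act as topics."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-9                                 # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical 4-document, 4-term count matrix with two latent topics.
tf = np.array([[2, 1, 0, 0],
               [3, 1, 0, 0],
               [0, 0, 4, 2],
               [0, 0, 3, 1]])
V = lnc_weight(tf)
W, H = nmf(V, k=2)
```

A deterministic alternative to the random initialization shown here (e.g. an SVD-based start such as NNDSVD) is the other initialization family the abstract contrasts it with.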