Abstract

This paper presents a study of extensions of Explicit Semantic Analysis (ESA) for text representation. The standard ESA algorithm allocates blocks of words in $\mathcal{O}(\vert V_2\vert \times n)$ time on average, where $n$ is the number of words in the text corpus under analysis and $\vert V_2\vert$ is the size of the vocabulary. The proposed extensions are based on selecting the training data for ESA and employ for that purpose the category structure of Wikipedia; the resulting method is called CESA. The paper proposes metrics for evaluating representation quality and tests the performance of the methods as a function of the training-data size. We also study the influence of these methods on the quality of the representation. We established that the total number of queries during training is $\mathcal{O}(\vert D\vert \times n)$, where $D \subseteq V_2$. Furthermore, the CESA method allocates blocks of words in $\mathcal{O}(\vert V_{1} \times V_{2}\vert \times n)$ time both on average and in the worst case.
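To make the stated complexity concrete, the following is a minimal sketch of the standard ESA bag-of-concepts construction. The inverted index, its concept names, and its weights are illustrative assumptions (real ESA derives them from a Wikipedia term-concept TF-IDF matrix); the sketch only shows why the cost is bounded by the vocabulary size times the number of words $n$.

```python
from collections import defaultdict

# Toy inverted index: word -> {concept: weight}.
# In real ESA these weights come from a TF-IDF matrix built over
# Wikipedia articles; the entries below are illustrative only.
INVERTED_INDEX = {
    "semantic": {"Semantics": 0.9, "Linguistics": 0.4},
    "analysis": {"Mathematical_analysis": 0.7, "Semantics": 0.3},
    "wikipedia": {"Wikipedia": 1.0, "Encyclopedia": 0.6},
}

def esa_vector(tokens):
    """Build a bag-of-concepts vector by summing the concept
    vectors of all tokens.  Each token touches at most |V2|
    concepts, so the total cost is O(|V2| * n) for n tokens."""
    vec = defaultdict(float)
    for tok in tokens:
        for concept, weight in INVERTED_INDEX.get(tok.lower(), {}).items():
            vec[concept] += weight
    return dict(vec)

print(esa_vector(["semantic", "analysis"]))
```

The CESA variant described above would restrict the index to words drawn from a category-selected subset $D \subseteq V_2$, which is what reduces the number of training queries.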
