Abstract

The use of word embedding models and deep learning algorithms is currently one of the most common and popular approaches to enhancing the overall performance of a text classification/categorization system. Word embedding models are vector representations, learned from a corpus, that map words with similar meanings to similar representations. Deep learning algorithms produce more successful results than conventional machine learning algorithms in many application areas. In this study, three different word embedding models, Word2Vec, GloVe, and FastText, are employed for word representation. Instead of conventional classification algorithms, three different deep learning architectures, Recurrent Neural Networks (RNN), Long Short-Term Memory networks (LSTM), and Convolutional Neural Networks (CNN), are used for the classification task, with experiments performed on collections of different Turkish documents. Experimental results show that using deep learning algorithms together with word embedding models improves the performance of text classification systems.
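The following is a minimal sketch, not the authors' code, of the general pipeline the abstract describes: learn word vectors from a corpus with Word2Vec (GloVe or FastText could be substituted) and feed the embedded sequences to an LSTM classifier (an RNN or CNN could take its place). The toy corpus, labels, and hyperparameters are placeholders chosen purely for illustration.

```python
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.initializers import Constant

# Toy tokenized Turkish documents and binary labels (placeholders).
corpus = [["ekonomi", "piyasa", "borsa"], ["spor", "mac", "gol"]]
labels = np.array([0, 1])

# 1) Learn word vectors from the corpus (FastText/GloVe vectors could be loaded instead).
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1)

# 2) Build an embedding matrix indexed by a simple word->id vocabulary (0 = padding).
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}
emb_matrix = np.zeros((len(vocab) + 1, 100))
for word, idx in vocab.items():
    emb_matrix[idx] = w2v.wv[word]

# 3) Convert each document to a fixed-length sequence of word ids.
max_len = 10
X = np.zeros((len(corpus), max_len), dtype=int)
for i, doc in enumerate(corpus):
    ids = [vocab[w] for w in doc][:max_len]
    X[i, :len(ids)] = ids

# 4) LSTM classifier on top of the frozen pretrained embeddings.
model = Sequential([
    Embedding(len(vocab) + 1, 100,
              embeddings_initializer=Constant(emb_matrix),
              trainable=False),
    LSTM(64),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, verbose=0)
```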
