Abstract

Natural Language Processing (NLP) systems have, over the past decade, shifted from rule-based techniques to machine learning algorithms. This shift has produced a variety of architectures and models for different tasks, including the Transformer, convolutional neural networks (CNNs), and recurrent neural networks (RNNs), which have become ubiquitous in NLP. However, designing these neural network architectures usually requires in-depth analysis and knowledge of the multiple domains relevant to the problem at hand. In this work, we evaluate an alternative approach in the domain of text classification: using the Genetic Algorithm with gradient descent (GAGD) and NeuroEvolution of Augmenting Topologies (NEAT) to search for an optimal neural architecture for the Reuters-21578 and 20 Newsgroups datasets. We compare the results of the two algorithms against current state-of-the-art architectures and provide insight into their performance.
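
To make the genetic search procedure concrete, below is a minimal sketch of an evolutionary architecture search in Python. It is not the paper's implementation: the search space (per-layer widths), the selection, crossover, and mutation operators, and the placeholder fitness function are all illustrative assumptions. In the paper's setting, fitness would instead come from training each candidate network with gradient descent and scoring it on held-out Reuters-21578 or 20 Newsgroups data.

```python
# Minimal sketch of a genetic search over simple architecture encodings.
# All choices here (search space, operators, fitness) are hypothetical
# stand-ins, not the GAGD or NEAT configuration used in the paper.
import random

LAYER_CHOICES = [1, 2, 3]           # hypothetical search space: network depth
UNIT_CHOICES = [64, 128, 256, 512]  # hypothetical search space: layer width

def random_genome():
    """A genome is a list of layer widths, one per hidden layer."""
    depth = random.choice(LAYER_CHOICES)
    return [random.choice(UNIT_CHOICES) for _ in range(depth)]

def fitness(genome):
    # Placeholder objective: a real run would build the network, train it
    # with gradient descent, and return validation accuracy on the
    # text-classification dataset.
    return -abs(sum(genome) - 384) + random.gauss(0, 10)

def crossover(a, b):
    """One-point crossover on the layer-width lists."""
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2):
    """Independently resample each layer width with probability `rate`."""
    return [random.choice(UNIT_CHOICES) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=20, generations=10):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print("best architecture (units per layer):", evolve())
```

NEAT differs from this fixed-encoding sketch in that it evolves the network topology itself, adding nodes and connections over generations and protecting structural innovations through speciation.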
