Abstract

In Natural Language Processing, selecting the right features is crucial for reducing unnecessary model complexity, speeding up training, and improving generalization. Multi-class text classification, however, makes it harder for models to generalize well, which in turn complicates feature selection. This paper investigates how feature selection affects model performance in multi-class text classification, using a dataset of projects completed by TÜBİTAK TEYDEB between 2009 and 2022. The study employs an LSTM (Long Short-Term Memory) network, a deep learning method, to classify the projects into nine industries based on various attributes. The paper proposes a new feature selection approach based on the Apriori algorithm, which reduces the number of attribute combinations considered and makes model training more efficient. Model performance is evaluated with metrics such as accuracy, loss, validation scores, and test scores. The key findings are that feature selection significantly affects model performance and that different feature sets have varying impacts on it.
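The abstract names the Apriori algorithm as the basis of the proposed feature selection approach but does not detail it. The sketch below illustrates only the standard Apriori frequent-itemset idea, applied to hypothetical project attribute tags; the attribute names, the minimum-support threshold, and the data are all invented for illustration and do not reflect the authors' actual implementation.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return attribute sets whose support (fraction of transactions
    containing them) is >= min_support. Candidate k-sets are joined
    only from frequent (k-1)-sets, pruning the combinatorial space --
    the property the paper exploits to cut attribute combinations."""
    n = len(transactions)
    current = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    k = 1
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: cnt / n for c, cnt in counts.items()
                 if cnt / n >= min_support}
        frequent.update(level)
        # Join step: combine frequent k-sets into (k+1)-set candidates.
        keys = list(level)
        current = {a | b for a, b in combinations(keys, 2)
                   if len(a | b) == k + 1}
        k += 1
    return frequent

# Hypothetical projects described by attribute tags (toy data).
projects = [
    {"sector", "budget", "duration"},
    {"sector", "budget"},
    {"sector", "duration"},
    {"budget", "duration"},
]
freq = apriori(projects, min_support=0.5)
# Pairs like {"sector", "budget"} survive; the full triple does not,
# so only frequently co-occurring attribute combinations remain.
```

In a feature selection setting, the surviving frequent itemsets would serve as the candidate attribute combinations to train on, instead of enumerating every subset.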
