Abstract
Efficient and accurate text classification is essential for a wide range of natural language processing applications, including sentiment analysis, spam detection, and machine-generated text identification. While recent advances in transformer-based large language models have achieved remarkable performance, they often incur significant computational costs, limiting their applicability in resource-constrained environments. In this work, we propose TextNeX, a new ensemble model that leverages lightweight language models to achieve state-of-the-art performance while maintaining computational efficiency. The development of the TextNeX model follows a three-phase procedure: (i) Expansion: generation of a pool of diverse lightweight models via randomized model setups and variations of the training data; (ii) Selection: application of a clustering-based, heterogeneity-driven selection procedure to retain the most complementary models; and (iii) Ensemble optimization: optimization of the selected models’ contributions using sequential quadratic programming. Experimental evaluations on three challenging text classification datasets demonstrate that TextNeX outperforms existing state-of-the-art ensemble models in accuracy, robustness, and computational efficiency, offering a practical alternative to large-scale models in real-world applications.
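The third phase, optimizing the ensemble members’ contributions with sequential quadratic programming, can be sketched in a few lines. The snippet below is a minimal illustration only, not the authors’ implementation: it assumes the ensemble output is a convex combination of per-model class probabilities, uses synthetic validation predictions from three hypothetical models, and fits the weights with SciPy’s SLSQP solver (an SQP method) under simplex constraints by minimizing validation log loss.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic validation data (illustrative only; stands in for the
# held-out predictions of selected lightweight models).
rng = np.random.default_rng(0)
n_samples, n_models = 200, 3
y_true = rng.integers(0, 2, n_samples)
# Each model emits a positive-class probability, noisily correlated
# with the true label to a varying degree.
probs = np.stack([
    np.clip(y_true * 0.7 + rng.normal(0.15, 0.2, n_samples), 1e-6, 1 - 1e-6)
    for _ in range(n_models)
])

def neg_log_likelihood(w):
    """Log loss of the weighted ensemble on the validation set."""
    p = np.clip(w @ probs, 1e-12, 1 - 1e-12)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# SLSQP (sequential quadratic programming): weights constrained to the
# probability simplex (non-negative, summing to one).
result = minimize(
    neg_log_likelihood,
    x0=np.full(n_models, 1.0 / n_models),   # start from uniform weights
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n_models,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
weights = result.x
print(weights)
```

In this formulation the optimized weights can never do worse on the validation objective than the uniform starting point, and models that add little complementary signal are driven toward zero weight.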
Published Version