Abstract
Neural networks for language modeling have proven effective on several sub-tasks of natural language processing. Training deep language models, however, is time-consuming and computationally intensive. Pre-trained language models such as BERT are thus appealing since (1) they yield state-of-the-art performance, and (2) they relieve practitioners of the burden of assembling the resources (time, hardware, and data) needed to train models. Nevertheless, because pre-trained models are generic, they may underperform on specific domains. In this study, we investigate the case of multi-class text classification, a task that is relatively less studied in the literature evaluating pre-trained language models. Our work is further situated in an industrial setting in the financial domain. We thus leverage generic benchmark datasets from the literature and two proprietary datasets from our partners in the financial technology industry. After highlighting the difficulty generic pre-trained models (BERT, DistilBERT, RoBERTa, XLNet, XLM) have in classifying a portion of the financial document dataset, we investigate the intuition that a pre-trained model specialized for financial documents, such as FinBERT, should be leveraged. Nevertheless, our experiments show that the FinBERT model, even with an adapted vocabulary, does not lead to improvements over the generic BERT models.
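The abstract does not detail the fine-tuning setup; as a minimal sketch, fine-tuning a generic pre-trained model for multi-class document classification with the Hugging Face transformers library might look like the snippet below. The model name, class count, and example data are hypothetical placeholders, not the paper's actual configuration.

```python
# Minimal sketch: fine-tuning a pre-trained encoder for multi-class text
# classification (assumes the Hugging Face `transformers` library; the model
# name, number of classes, and data are illustrative placeholders).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"   # a domain model such as FinBERT could be swapped in
NUM_CLASSES = 5                    # hypothetical number of document classes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_CLASSES
)

texts = ["Quarterly earnings exceeded expectations.",
         "The loan application was rejected."]
labels = torch.tensor([0, 1])      # placeholder class ids

# Tokenize the batch and run one fine-tuning step on the full model.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()            # gradients for an optimizer step
print(outputs.logits.shape)        # (batch_size, NUM_CLASSES)
```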