Abstract
Bidirectional Encoder Representations from Transformers (BERT) has gained increasing attention from researchers and practitioners, as it has proven to be an invaluable technique in natural language processing. This is mainly due to its distinctive features: its ability to predict words conditioned on both the left and the right context, and its ability to be pretrained on the vast amounts of plain text freely available on the web. As interest in BERT grew, BERT models were introduced to support additional languages, including Arabic. However, the current state of knowledge and practice in applying BERT models to Arabic text classification is limited. In an attempt to begin remedying this gap, this review synthesizes the Arabic BERT models that have been applied to text classification. It investigates the differences between them and compares their performance. It also examines how effective they are compared to the original English BERT models. It concludes by offering insight into aspects that need further improvement and into future work.