Abstract

The availability of different pre-trained semantic models has enabled the quick development of machine learning components for downstream applications. However, even if texts are abundant for low-resource languages, very few semantic models are publicly available. Most publicly available pre-trained models are built as multilingual models, which do not fit the needs of low-resource languages well. We introduce several semantic models for Amharic, a morphologically complex Ethio-Semitic language. After investigating the publicly available pre-trained semantic models, we fine-tune two of them and train seven new models. The models include Word2Vec embeddings, a distributional thesaurus (DT), BERT-like contextual embeddings, and DT embeddings obtained via network embedding algorithms. Moreover, we employ these models for different NLP tasks and study their impact. We find that the newly trained models perform better than the pre-trained multilingual models. Furthermore, models based on contextual embeddings from FLAIR and RoBERTa perform better than Word2Vec models for the NER and POS tagging tasks, while DT-based network embeddings are suitable for the sentiment classification task. We publicly release all the semantic models, machine learning components, and several benchmark datasets for NER, POS tagging, and sentiment classification, as well as Amharic versions of WordSim353 and SimLex999.
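To make the kind of release described above concrete, the following is a minimal sketch of how released Word2Vec-style vectors could be loaded and queried with gensim. The file name and the query word are hypothetical placeholders, not artifacts named in the paper.

```python
# Minimal sketch: load word vectors stored in the standard word2vec binary
# format and query nearest neighbours. "amharic_word2vec.bin" is a
# hypothetical placeholder, not the authors' released file name.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("amharic_word2vec.bin", binary=True)

# Nearest neighbours of an example Amharic word ("ኢትዮጵያ", Ethiopia).
for word, score in vectors.most_similar("ኢትዮጵያ", topn=5):
    print(word, round(score, 3))
```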

Highlights

  • For the development of applications with semantic capabilities, models such as word embeddings and distributional semantic representations play an important role

  • We will report the results for different natural language processing (NLP) tasks using the existing and newly‐built semantic models

  • We first surveyed the limited number of pre‐trained semantic models available, which are provided as part of multilingual experiments



Introduction

For the development of applications with semantic capabilities, models such as word embeddings and distributional semantic representations play an important role. These models are the building blocks for a number of natural language processing (NLP) applications. The availability of pre-trained semantic models allows researchers to focus on the actual NLP task rather than investing time in computing such models. We refer to semantic models as the techniques and approaches used to build word representations or embeddings that can be used in different downstream NLP applications. The work by [1] indicates that word-level representations or word embeddings have played a central role in the development of many NLP tasks.
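As an illustration of the "techniques used to build word representations" mentioned above, the sketch below trains a Word2Vec model on a plain-text corpus with gensim. The corpus path and hyperparameters are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch, assuming a plain-text Amharic corpus with one sentence per
# line; corpus file name and hyperparameters are illustrative only.
from gensim.models import Word2Vec

# Whitespace-tokenised sentences read from a hypothetical corpus file.
with open("amharic_corpus.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f if line.strip()]

# Train 100-dimensional skip-gram embeddings (sg=1).
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, sg=1, workers=4)
model.wv.save_word2vec_format("amharic_word2vec.bin", binary=True)
```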

