Abstract

Over the years, many different algorithms have been proposed to improve the accuracy of automatic parts of speech tagging. High tagging accuracy is very important for any NLP application. Powerful models such as the Hidden Markov Model (HMM) require a large amount of training data and are also less accurate at detecting unknown (untrained) words. Most of the world's languages lack sufficient resources in computable form for training such models, and NLP applications for these languages encounter many unknown words during execution, which results in low accuracy. Improving accuracy for such low-resource languages is an open problem. In this paper, one stochastic method and one deep learning model are proposed to improve accuracy for such languages. The proposed language-independent methods improve unknown-word accuracy and overall accuracy with a small amount of training data. First, character bigrams and trigrams that already occur in the training samples are used to estimate the maximum-likelihood tags for unknown words within the Viterbi algorithm and HMM. With training datasets of fewer than 10K words, an accuracy improvement of 12% to 14% has been achieved. Next, a deep neural network model is proposed that works with very little training data; it combines word-level, character-level, character-bigram, and character-trigram representations to perform parts of speech tagging. The model improves the overall accuracy of the tagger as well as the accuracy for unknown words. Results for English and the low-resource Indian language Assamese are discussed in detail. Performance is better than many state-of-the-art techniques for low-resource languages. The method is generic and can be applied to any language with very little training data.
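The following is a minimal sketch of the character n-gram idea summarized above: for a word never seen during training, an emission score per tag is approximated from the tag counts of its character bigrams and trigrams observed in the training data, and such scores would then feed the emission probabilities used by Viterbi decoding in the HMM. All function names, parameters, and the toy corpus are illustrative assumptions, not the authors' actual implementation.

```python
from collections import defaultdict

def char_ngrams(word, n):
    """Return all character n-grams of a word (e.g. 'cat', 2 -> ['ca', 'at'])."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def train_ngram_tag_counts(tagged_corpus, ns=(2, 3)):
    """Count how often each character n-gram co-occurs with each POS tag."""
    counts = defaultdict(lambda: defaultdict(int))  # ngram -> tag -> count
    for word, tag in tagged_corpus:
        for n in ns:
            for g in char_ngrams(word.lower(), n):
                counts[g][tag] += 1
    return counts

def unknown_word_emission(word, counts, tags, ns=(2, 3), alpha=1.0):
    """Approximate a normalized P(word | tag) for an unseen word from its
    character n-grams; add-alpha smoothing keeps every tag possible."""
    scores = {}
    for tag in tags:
        score = alpha
        for n in ns:
            for g in char_ngrams(word.lower(), n):
                score += counts.get(g, {}).get(tag, 0)
        scores[tag] = score
    total = sum(scores.values())
    return {tag: s / total for tag, s in scores.items()}

if __name__ == "__main__":
    # Toy training data of (word, tag) pairs -- purely illustrative.
    corpus = [("running", "VERB"), ("walking", "VERB"), ("jumping", "VERB"),
              ("teacher", "NOUN"), ("worker", "NOUN"), ("singer", "NOUN")]
    counts = train_ngram_tag_counts(corpus)
    # "swimming" is unknown; its '-ing' bigrams/trigrams push the score toward VERB.
    print(unknown_word_emission("swimming", counts, {"VERB", "NOUN"}))
```

In a full tagger, these smoothed scores would replace the zero emission probabilities that an unseen word would otherwise receive, allowing the Viterbi search to still rank tag sequences for sentences containing unknown words.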

Highlights

  • Parts of speech tagging can be viewed as the problem of word classification

  • This paper describes two specific works to improve the performance of automatic parts of speech tagging for languages with a low quantity of resources in computable form

  • The overall accuracy remains above 80% even when the test dataset is as large as the training set


Summary

Introduction

Parts of speech tagging can be viewed as the problem of word classification, where each class contains words sharing common properties regarding their usage in sentences. Any language processing task depends heavily on the accuracy of tagging. Most developments in natural language processing are observed for a few dominant, widely spoken languages; this is because of a lack of extensive research and the non-availability of computable resources for other languages. It is therefore important to identify the key factors that affect accuracy and to exploit them so that such languages can benefit from advances in natural language processing. It is equally important to design systems that can be used across languages so that the benefit can be transferred to any language.
