Abstract

Long short-term memory (LSTM) networks have gained popularity for modeling sequential data in tasks such as phoneme recognition, speech translation, language modeling, speech synthesis, and chatbot-style dialog systems. This paper investigates attention-based encoder-decoder LSTM networks for Tifinagh part-of-speech (POS) tagging, comparing them against Conditional Random Fields (CRF) and Decision Tree models. The attractiveness of LSTM networks lies in their strength at modeling long-distance dependencies. The experimental results show that LSTM networks outperform both CRF and Decision Tree taggers, which themselves perform comparably to each other.
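The long-distance modeling ability mentioned above comes from the LSTM's gated cell state, which can carry information across many time steps. The following is a minimal sketch of a single LSTM step in plain NumPy, not the paper's attention-based encoder-decoder; all weight shapes and the toy sequence are illustrative assumptions.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (minimal sketch, not the paper's model).
    The forget/input gates decide what the cell state keeps, which
    is what lets information persist over long spans."""
    z = W @ x + U @ h_prev + b            # all four pre-activations stacked
    H = h_prev.size
    i = 1.0 / (1.0 + np.exp(-z[:H]))      # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))   # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H])) # output gate
    g = np.tanh(z[3*H:])                  # candidate cell update
    c = f * c_prev + i * g                # gated cell-state update
    h = o * np.tanh(c)                    # hidden state (the tagger's features)
    return h, c

rng = np.random.default_rng(0)
D, H = 4, 3                               # toy input / hidden sizes (assumed)
W = 0.1 * rng.normal(size=(4 * H, D))
U = 0.1 * rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for t in range(5):                        # run over a toy 5-step sequence
    x = rng.normal(size=D)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)
```

In a real POS tagger, `h` at each step would feed a softmax layer over the tag set; an attention-based encoder-decoder additionally reweights encoder states when emitting each tag.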
