Abstract

This paper focuses on slot tagging of Gujarati dialogue, which enables Gujarati-language communication between human and machine, allowing machines to perform a given task and provide the desired output. Tagging accuracy depends entirely on the bifurcation of slots and on the word embedding. Proper slot tagging is also challenging for a researcher because dialogue and speech differ from person to person, which makes the slot tagging methodology more complex. Various deep learning models are available to researchers for slot tagging; this paper focuses mainly on Long Short-Term Memory (LSTM), Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM), Long Short-Term Memory - Conditional Random Field (LSTM-CRF), Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network - Bidirectional Long Short-Term Memory (CNN-BiLSTM) and Bidirectional Long Short-Term Memory - Conditional Random Field (BiLSTM-CRF). Comparing these models, the BiLSTM models perform better than the LSTM models by roughly 2% in F1-measure, as they contain an additional layer that lets the word sequence be traversed from backward to forward. Among the BiLSTM models, BiLSTM-CRF outperforms the other two: its F1-measure is better than CNN-BiLSTM by 1.2% and than BiLSTM by 2.4%.

Keywords: Spoken Language Understanding (SLU), Long Short-Term Memory (LSTM), Slot tagging, Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network - Bidirectional Long Short-Term Memory (CNN-BiLSTM), Bidirectional Long Short-Term Memory - Conditional Random Field (BiLSTM-CRF)
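As a rough illustration of the bidirectional architecture mentioned above (a minimal sketch, not the authors' implementation; the vocabulary size, tag set, and hyperparameters below are hypothetical placeholders), a BiLSTM slot tagger in PyTorch could look like the following. A BiLSTM-CRF variant would replace the per-token argmax decoding with a CRF layer that scores whole tag sequences jointly.

# Minimal BiLSTM slot tagger sketch (PyTorch). Sizes and hyperparameters
# are illustrative placeholders, not the paper's settings.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, tagset_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # bidirectional=True adds the backward pass described in the abstract,
        # so each token sees both left and right context.
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):                  # (batch, seq_len)
        emb = self.embed(token_ids)                # (batch, seq_len, emb_dim)
        out, _ = self.lstm(emb)                    # (batch, seq_len, 2*hidden)
        return self.fc(out)                        # per-token slot-tag scores

# Toy usage with made-up sizes: 5000-word vocabulary, 12 slot tags.
model = BiLSTMTagger(vocab_size=5000, tagset_size=12)
tokens = torch.randint(1, 5000, (2, 7))            # two sentences of 7 tokens
scores = model(tokens)
pred_tags = scores.argmax(dim=-1)                  # greedy per-token decoding;
                                                   # BiLSTM-CRF would instead
                                                   # decode the sequence jointly.
print(pred_tags.shape)                             # torch.Size([2, 7])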
