Abstract

Emotion detection in text is one of the most challenging tasks in natural language understanding (NLU), because human emotions are hard to infer from text alone, without cues such as facial expressions. Since the structure of Indonesian differs from that of other languages, this study focuses on emotion detection in Indonesian text. The nine experimental scenarios of this study combine word embeddings (bidirectional encoder representations from transformers (BERT), Word2Vec, and GloVe) with emotion detection models (bidirectional long short-term memory (BiLSTM), LSTM, and convolutional neural network (CNN)). BERT-BiLSTM achieves the highest accuracy on the data, with values of 88.28%, 88.42%, and 89.20% for Commuter Line, Transjakarta, and Commuter Line+Transjakarta, respectively. In general, BiLSTM produces the highest accuracy, followed by LSTM, and finally CNN. Among the word embeddings, BERT outperformed Word2Vec and GloVe. In addition, the BERT-BiLSTM model yields the highest precision, recall, and F1-measure in every data scenario compared to the other models. The results of this study show that BERT-BiLSTM can improve classification performance compared to previous studies that used only BERT or only BiLSTM for emotion detection in Indonesian text.
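To illustrate the kind of architecture the abstract describes, the following is a minimal sketch (not the authors' code) of a BERT-BiLSTM emotion classifier in PyTorch. It assumes the Hugging Face transformers library, an Indonesian BERT checkpoint such as indobenchmark/indobert-base-p1, and a hypothetical set of five emotion classes; the actual checkpoint, label set, and hyperparameters used in the study may differ.

```python
# Sketch of a BERT-BiLSTM emotion classifier: BERT supplies contextual
# token embeddings, which a BiLSTM reads before a linear output layer.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMClassifier(nn.Module):
    def __init__(self, bert_name="indobenchmark/indobert-base-p1",  # assumed checkpoint
                 hidden_size=128, num_classes=5):                   # hypothetical sizes
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden_size,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        # Token-level BERT representations serve as the embedding sequence.
        embeddings = self.bert(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(embeddings)
        # Classify from the final BiLSTM time step.
        return self.classifier(lstm_out[:, -1, :])

tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
model = BertBiLSTMClassifier()
# Example Indonesian tweet-style input ("the train is late again, so annoyed").
batch = tokenizer(["keretanya telat lagi, kesal sekali"],
                  padding=True, truncation=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```

Swapping the BERT embeddings for static Word2Vec or GloVe vectors, or replacing the BiLSTM with an LSTM or CNN head, would reproduce the other embedding-model combinations compared in the nine experimental scenarios.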
