Abstract
In the contemporary world, people share their thoughts rapidly on social media. Mining this information and extracting knowledge from it for sentiment analysis is a complex task. Even though automated machine learning algorithms and techniques are available, extracting semantic and relevant key terms from a sparse representation of a review is difficult. Word embedding improves text classification by addressing the sparse-matrix problem and capturing the semantics of words. In this paper, a novel architecture is proposed that combines long short-term memory (LSTM) with word embedding to extract the semantic relationships between neighboring words; a weighted self-attention mechanism is also applied to extract key terms from the reviews. Based on experimental analysis on the IMDB dataset, the authors show that the proposed word-embedding self-attention LSTM architecture achieved an F1 score of 88.67%, while the LSTM and word-embedding LSTM models achieved F1 scores of 84.42% and 85.69%, respectively.
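The attention pooling described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a common formulation of weighted self-attention in which a learned score vector `w` (hypothetical name) assigns a weight to each LSTM hidden state, and the softmax-normalized weights produce a context vector that emphasizes key terms:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def weighted_self_attention(H, w):
    """Pool a sequence of LSTM hidden states into one context vector.

    H : (T, d) array of hidden states, one row per time step
    w : (d,)  learned scoring vector (a modeling assumption here)
    """
    scores = H @ w              # one relevance score per time step, shape (T,)
    alpha = softmax(scores)     # attention weights summing to 1
    context = alpha @ H         # weighted sum of hidden states, shape (d,)
    return context, alpha

# Toy example: 5 time steps, hidden size 8
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
w = rng.standard_normal(8)
context, alpha = weighted_self_attention(H, w)
```

In a full model, `context` would feed a dense classification layer; time steps with high `alpha` values correspond to the review terms the model treats as most important.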
Published in: International Journal of Ambient Computing and Intelligence