Abstract
A central problem in sentiment analysis is text representation: encoding text as a continuous vector by projecting its semantics onto points in a high-dimensional space. Deep learning methods have been widely used to solve a variety of sentiment analysis problems, and improving their performance requires a good text representation method to serve as the embedding layer. In this study, we analyze a deep learning approach to sentiment classification based on the Recurrent Neural Network (RNN) with its Long Short-Term Memory (LSTM) variant. We compare the performance of an LSTM network that uses Word2Vec word embeddings against the same network without Word2Vec. The sentiment data are drawn from user-provided reviews of applications on Google Play. The training process uses a Dropout layer and Early Stopping to prevent overfitting. The results show that the LSTM network with Word2Vec word embeddings outperforms the network without Word2Vec: LSTM with 300-dimensional Word2Vec embeddings achieved the lowest error value of 0.3287 with an accuracy of 86.76%, while the LSTM without Word2Vec achieved a lowest error of 0.3751 with an accuracy of 84.14%.
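The sketch below illustrates the kind of setup the abstract describes: an LSTM classifier whose embedding layer is either initialized from pretrained Word2Vec vectors or learned from scratch, trained with a Dropout layer and Early Stopping. It is a minimal example using Keras and gensim under assumed settings; the vocabulary size, sequence length, number of LSTM units, and dropout rate are illustrative choices, not values taken from the paper.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense
from tensorflow.keras.callbacks import EarlyStopping

# Hypothetical settings; only the 300-dimensional embedding size comes
# from the abstract, the rest are illustrative assumptions.
VOCAB_SIZE = 10000
MAX_LEN = 100
EMBED_DIM = 300

def build_embedding_matrix(w2v, word_index):
    """Assumes w2v is a gensim Word2Vec model trained on the review corpus
    and word_index maps tokens to integer ids < VOCAB_SIZE."""
    matrix = np.zeros((VOCAB_SIZE, EMBED_DIM))
    for word, idx in word_index.items():
        if idx < VOCAB_SIZE and word in w2v.wv:
            matrix[idx] = w2v.wv[word]
    return matrix

def build_model(embedding_matrix=None):
    model = Sequential()
    if embedding_matrix is not None:
        # LSTM variant with pretrained Word2Vec vectors as the embedding layer
        model.add(Embedding(VOCAB_SIZE, EMBED_DIM,
                            weights=[embedding_matrix],
                            input_length=MAX_LEN, trainable=False))
    else:
        # Baseline variant: embedding layer learned from scratch (no Word2Vec)
        model.add(Embedding(VOCAB_SIZE, EMBED_DIM, input_length=MAX_LEN))
    model.add(LSTM(128))
    model.add(Dropout(0.5))                      # Dropout layer against overfitting
    model.add(Dense(1, activation="sigmoid"))    # binary sentiment output
    model.compile(loss="binary_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
    return model

# Early Stopping halts training once validation loss stops improving.
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.1,
#           epochs=20, callbacks=[early_stop])
```

Comparing the two configurations then amounts to calling `build_model(embedding_matrix)` versus `build_model(None)` and training both with the same data and callbacks.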