Abstract

This work examines the performance of long short-term memory (LSTM) variants on social media text data. We evaluate five recurrent architectures, namely classic LSTM, bidirectional LSTM, stacked LSTM, the gated recurrent unit (GRU), and bidirectional GRU, on a social network dataset comprising texts extracted from multiple social media platforms. Through a comparative study of precision, recall, F1-score, and accuracy, we aim to identify the most effective of the five variants for text analysis. The findings show that the classic LSTM and the GRU outperform the other models in accuracy, whereas the bidirectional models (bidirectional LSTM and bidirectional GRU) achieve better precision than their unidirectional counterparts. These results have significant implications for developing more efficient models for natural language processing applications and offer useful insights into the analysis of depression on social media platforms through text data.
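For readers who want a concrete picture of what is being compared, the sketch below builds the five architectures in Keras. The framework choice, vocabulary size, embedding dimension, unit counts, and the binary output (e.g., depressive vs. non-depressive text) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of the five compared architectures.
# Hyperparameters and the binary classification head are assumptions.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, GRU, Bidirectional, Dense

VOCAB_SIZE, EMBED_DIM, UNITS = 20_000, 128, 64  # assumed values

def build_model(variant: str) -> Sequential:
    model = Sequential([Embedding(VOCAB_SIZE, EMBED_DIM)])
    if variant == "classic_lstm":
        model.add(LSTM(UNITS))
    elif variant == "bidirectional_lstm":
        model.add(Bidirectional(LSTM(UNITS)))
    elif variant == "stacked_lstm":
        model.add(LSTM(UNITS, return_sequences=True))  # feed full sequence to next layer
        model.add(LSTM(UNITS))
    elif variant == "gru":
        model.add(GRU(UNITS))
    elif variant == "bidirectional_gru":
        model.add(Bidirectional(GRU(UNITS)))
    else:
        raise ValueError(f"unknown variant: {variant}")
    model.add(Dense(1, activation="sigmoid"))  # binary output (assumed task framing)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Each variant would then be trained on the same tokenized corpus and scored on a
# held-out split with precision, recall, F1-score, and accuracy (e.g., scikit-learn).
```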
