Abstract

This paper presents a novel approach for modeling threaded discussions on social media using a graph-structured bidirectional LSTM (long short-term memory) network which represents both the hierarchical and the temporal structure of a conversation. In experiments on the task of predicting the popularity of comments in Reddit discussions, the proposed model outperforms a node-independent architecture across different sets of input features. Analyses show a benefit to the model over the full course of the discussion, improving detection in both early and late stages. Further, the use of language cues with the bidirectional tree state updates helps with identifying controversial comments.

Highlights

  • Social media provides a convenient and widely used platform for discussions among users

  • To see how the model benefits from language cues in underpredicted and overpredicted cases, we examine the size of the errors made by the graph-LSTM model with and without text features

  • This paper presents a novel approach for modeling threaded discussions on social media using a graph-structured bidirectional LSTM which represents both hierarchical and temporal conversation structure

Summary

Introduction

Social media provides a convenient and widely used platform for discussions among users. Popularity has been defined in terms of the volume of the response, but when the social media platform has a mechanism for readers to like or dislike comments (i.e., upvote/downvote), the difference between positive and negative votes provides a more informative score for popularity prediction; this definition of popularity is adopted here (Transactions of the Association for Computational Linguistics, vol. 6). The main contributions of this paper include: a novel approach for representing tree-structured language processes (e.g., social media discussions) with LSTMs; an evaluation of the model on the popularity prediction task using Reddit discussions; and an analysis of the performance gains with respect to the role of language context.
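The idea of combining hierarchical (reply-tree) and temporal structure can be illustrated with a minimal sketch. The code below is not the paper's graph-LSTM: it uses a simple tanh recurrence in place of full LSTM gating, and all dimensions, the toy discussion tree, and the mean aggregation of child states are illustrative assumptions. It shows the two passes over a comment tree that a bidirectional tree model performs: a top-down pass in which each comment conditions on its parent, and a bottom-up pass in which each comment aggregates its replies.

```python
import numpy as np

# Hypothetical sketch (NOT the paper's exact model): bidirectional
# state updates over a Reddit-style comment tree.
rng = np.random.default_rng(0)
D = 4  # state dimensionality (assumed for illustration)

# Toy discussion: node 0 is the post; children[i] lists direct replies to i.
children = {0: [1, 2], 1: [3], 2: [], 3: []}
x = {i: rng.standard_normal(D) for i in children}  # per-comment input features

W_in = rng.standard_normal((D, D)) * 0.1   # input projection
W_rec = rng.standard_normal((D, D)) * 0.1  # recurrent projection

def update(x_i, h_prev):
    """Simple recurrent cell standing in for the LSTM update."""
    return np.tanh(W_in @ x_i + W_rec @ h_prev)

# Top-down pass: each comment conditions on its parent's state.
h_down = {}
def down(i, h_parent):
    h_down[i] = update(x[i], h_parent)
    for c in children[i]:
        down(c, h_down[i])
down(0, np.zeros(D))

# Bottom-up pass: each comment aggregates its children's states (mean here).
h_up = {}
def up(i):
    kid_states = [up(c) for c in children[i]]
    agg = np.mean(kid_states, axis=0) if kid_states else np.zeros(D)
    h_up[i] = update(x[i], agg)
    return h_up[i]
up(0)

# Each node's representation combines both directions, so a comment's
# predicted popularity can reflect both its context and its responses.
rep = {i: np.concatenate([h_down[i], h_up[i]]) for i in children}
```

In a trained model, the per-node representation `rep[i]` would feed a classifier or regressor for the comment's karma; here it simply demonstrates that every comment's state depends on both its ancestors and its descendants.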

Method
Graph-structured LSTM
Input Features
Pruning
Training
Experiments
Task and Evaluation Metrics
Baseline and Contrast Systems
Karma Level Prediction
Analysis
Importance of Responses
Language Use Analysis
Related Work
Findings
Conclusion
