Abstract

In recent years, the damage caused by negative tweets has become a social problem. In this paper, we consider a method of suppressing negative tweets using reinforcement learning. In particular, we consider the case where tweet writing is modeled as a multi-agent environment. Numerical experiments verify the suppression effect of several reinforcement learning methods, and we also verify robustness to environmental changes. We compared Profit Sharing (PS) and Q-learning (QL) as reinforcement learning methods, confirmed the effectiveness of PS, and examined how the rationality theorem behaves in a multi-agent environment. Furthermore, in experiments on the ability to follow environmental changes, PS was confirmed to be more robust than QL. If machines can appropriately intervene in and interact with posts made by humans, negative tweets and even blow-ups can be expected to be suppressed automatically, without the need for costly human monitoring.
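To make the comparison concrete, the sketch below illustrates the two tabular update rules the abstract refers to: QL updates a state-action value after every transition via a bootstrapped one-step target, while PS reinforces the entire episode trace once a reward arrives, with geometrically decreasing credit. This is a minimal illustration only, not the paper's actual environment, reward design, or parameter settings; the class names, hyperparameters, and the particular credit-assignment function are assumptions.

```python
import random
from collections import defaultdict


class QLearningAgent:
    """Tabular Q-learning: one-step bootstrapped TD update after each transition."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state):
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])


class ProfitSharingAgent:
    """Tabular Profit Sharing: reinforce the whole episode trace when a reward
    arrives, using a geometrically decreasing credit-assignment function
    (a common choice intended to satisfy the usual rationality condition)."""

    def __init__(self, n_actions, decay=None, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        # decay below 1 / n_actions is a typical rationality-theorem-motivated choice (assumption)
        self.decay = decay if decay is not None else 1.0 / (n_actions + 1)
        self.epsilon = epsilon
        self.episode = []  # (state, action) trace for the current episode

    def act(self, state):
        if random.random() < self.epsilon:
            action = random.randrange(self.n_actions)
        else:
            values = self.q[state]
            action = values.index(max(values))
        self.episode.append((state, action))
        return action

    def reinforce(self, reward):
        # Distribute credit backwards along the trace: f(t) = reward * decay^t
        credit = reward
        for state, action in reversed(self.episode):
            self.q[state][action] += credit
            credit *= self.decay
        self.episode.clear()
```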
