This research explores the dual role of social media algorithms in conflict formation and resolution. Using a systematic literature review, it analyses how algorithms can amplify polarisation and spread misinformation, as well as their potential to be leveraged in mitigating conflict and promoting constructive dialogue. The results show that algorithms designed to maximise user engagement often contribute to conflict escalation through the formation of "filter bubbles" and the spread of misinformation. However, recent research has also revealed the potential of algorithms, if designed with ethical and social principles in mind, to be instrumental in early conflict detection and the promotion of dialogue across groups. This study highlights the implications of these findings for technology companies, policymakers, and civil society, and emphasises the need for an interdisciplinary approach, proactive regulation, and increased digital literacy in addressing algorithmic challenges. In conclusion, social media algorithms are flexible tools whose impact depends on the values, principles, and goals embedded in their design. A holistic and collaborative approach is needed to harness the potential of algorithms in mitigating conflict while minimising their role in deepening social divisions.