Abstract

This research explores the dual role of social media algorithms in conflict formation and resolution. Using a systematic literature review, it analyses how algorithms can amplify polarization and spread misinformation, as well as how they might be leveraged to mitigate conflict and promote constructive dialogue. The results show that algorithms designed to maximise user engagement often contribute to conflict escalation through the formation of "filter bubbles" and the spread of misinformation. However, recent research has also revealed that algorithms, if designed with ethical and social principles in mind, can be instrumental in early conflict detection and in promoting dialogue across groups. This study highlights the implications of these findings for technology companies, policymakers, and civil society, and emphasises the need for an interdisciplinary approach, proactive regulation, and increased digital literacy in addressing the challenges algorithms pose. In conclusion, social media algorithms are flexible tools whose impact depends on the values, principles, and goals embedded in their design. A holistic and collaborative approach is needed to harness the potential of algorithms in mitigating conflict while minimising their role in deepening social divisions.
