Abstract

Following the large-scale 2015–2016 migration crisis that shook Europe, big data and social media harvesting methods gradually became popular in the monitoring of mass forced migration. These methods have focused on producing ‘real-time’ inferences and predictions about individual and social behavioral, preferential, and cognitive patterns of human mobility. Although the volume of such data has grown rapidly thanks to social media and remote sensing technologies, these methods have also produced biased, flawed, or otherwise invasive results that have made migrants’ lives more difficult in transit. This review article explores the recent debate on the use of social media data to train machine learning classifiers and modify thresholds so that algorithmic systems can monitor and predict violence and forced migration. Ultimately, it identifies and dissects five prevalent explanations in the literature for the limitations of such data in A.I. forecasting, namely ‘policy-engineering mismatch’, ‘accessibility/comprehensibility’, ‘legal/legislative legitimacy’, ‘poor data cleaning’, and ‘difficulty of troubleshooting’. From this review, the article suggests anonymization, distributed responsibility, and the ‘right to reasonable inferences’ debate as potential solutions and next research steps to remedy these problems.
