Abstract

This article presents an extensive and global survey of the use of Artificial Intelligence (AI) to address the COVID-19 pandemic, together with a comprehensive discussion of the ethical and human rights implications of AI's deployment during the crisis. AI applications contributed to the COVID-19 response in areas including early warnings and alerts; tracking and prediction; diagnosis and prognosis; drug treatments; and social and medical management. These uses of AI technology raise human rights issues and ethical risks, for example around equality, non-discrimination, and accessibility, particularly as they relate to gender, ethnicity, locality, and wealth. To understand the risks before relying on such methods, we must assess whether data can be collected any more effectively from people in remote or disadvantaged areas than with traditional methods. Other questions include whether information is gathered equally from women, men, and older people; whether all ethnicities have equal access to phones and mobile data; and whether the cost of internet access and data use discriminates against poorer people.

While the "coronacrisis" advanced AI-based responses to global health emergencies, this wide-reaching AI capacity raises an array of ethical and human rights challenges. The need for governments to act quickly and globally in tackling the coronavirus resulted in unprecedented practices amid a lack of public trust. AI technologies assisted governments in curbing the global epidemiological threat, yet the application of these tools threatened fundamental rights. AI-based interventions such as contact tracing raised valid fears of "surveillance creep". Global human rights are also implicated in measures targeting the spread of COVID-19-related misinformation. Concern over the internet as a carrier of fake news intensified during COVID-19, and conspiracy theories and alternative narratives mushroomed all over the world. While some fears of misinformation in this context are valid, the pandemic resulted in an unprecedented global crackdown on freedom of expression.

AI applications during the pandemic challenged widely held commitments to privacy, autonomy, and civil liberties. The "coronacrisis" was viewed by some as a perfect storm for undermining the right to privacy, through surveillance, and the right to freedom of expression. Tech-based responses to COVID-19 included drone surveillance, facial recognition technologies, and contact-tracing and quarantine-enforcement apps. With no expiration dates or sunset clauses in sight for these technologies' deployment, there are concerns that such surveillance measures could erode privacy further and over the long term. While AI is a powerful tool, humans remain central to evaluating and interpreting its output and ensuring its ethical application. Human input, across disciplines, remains needed for the optimal application of AI against COVID-19 and in other contexts. Overcoming the lack of data needed to optimize AI as a pandemic tool will require a careful balance between data privacy and public health. Increasing diagnostic data is valuable and essential to save lives, train AI, and harness AI for other public health applications. Because of these technical, ethical, and human rights risks, AI must be developed alongside human rights and ethical considerations.
