Abstract

Governments in Singapore, India, and the UK have deployed surveillance, restrictive pandemic control policies, and predictive technologies to tackle the spread of COVID-19. Although some of these measures have proven efficacious, many bring with them adverse effects on fundamental rights and liberties that necessitate regulatory and policy monitoring. This project was initiated to evaluate the discriminatory consequences of COVID-19 control measures on vulnerable groups in society, so as to advocate for anti-discrimination policy outcomes and resilience-building across communities. In it, we offer six use-cases: 1) migrant workers in Singapore; 2) migrant workers in India; 3) migrant workers in the UK; 4) the elderly in Singapore; 5) institutional aged care in the UK; and 6) vulnerable groups in India based on caste and race. These six cases examine the control policies and containment strategies that too often negatively shaped the pandemic experiences of these communities across the three countries. By scrutinizing control measures employed within these jurisdictions of interest, the project aims to shed light on the interplay between discriminatory state responses and the exacerbation of pre-existing vulnerabilities. Through this exercise, the project offers early intervention approaches that promote and sustain more appropriate, ethical and equitable pandemic and crisis interventions, particularly those relying on AI-assisted surveillance and data sharing. The flattening of identified pandemic healthcare inequalities will have positive ramifications for human dignity, autonomy, and rights-recognition across numerous vulnerable communities, including migrant workers, the elderly, and racial minorities. Additionally, the economic benefits of maintaining productivity and reducing intervention costs can be significant. The amelioration of discriminatory outcomes will also enhance and restore confidence within these communities and trust in their respective State authorities, leading to more effective pandemic containment.

Note: The Centre for AI & Data Governance (CAIDG) is a research institute situated in the Singapore Management University School of Law. The Centre conducts independent research on policy, regulation, governance, ethics, and other issues relating to AI and data use. As part of our COVID data regulation and policy research, the CAIDG has been researching COVID control strategies (employing AI-assisted technologies and big data) and their relation to cycles of vulnerability and discrimination. This project is part of our much wider commentary on the efficacy and legitimacy of COVID control measures through data, and is complementary to our upcoming TUM/BIICL collaboration on the Rule of Law, Legitimacy and Effective COVID-19 control policy.
