From helping you and me unlock our smartphones with a mere glance to enabling the police to identify criminals via CCTV footage, artificial intelligence (AI)-powered face recognition technology (FRT) is advancing rapidly. According to reports, the Indian Railways will install FRT-based video surveillance systems at 983 railway stations across the country to ramp up security. However, the accuracy and reliability of FRT depend on the quality of the input images, the algorithms used and the size and quality of the reference database. Because these elements are prone to major imbalances, FRT algorithms have been found to be biased, largely against minority communities and women, exacerbating already prevalent forms of societal discrimination. Scrutinising key research studies that collectively expose the racial and gender bias prevalent in widely adopted FRT tools in the United States and India, this article analyses how the bias and inaccuracy of these tools have led to poor outcomes and raised concerns when deployed by law enforcement agencies in India. Furthermore, the article traces the historical context of these biases and proposes debiasing measures that go beyond balancing datasets.