Abstract

Robert Williams made headlines in the United States in 2020 when he was wrongly identified by a facial recognition system and arrested as a wanted criminal.1 The root cause of this appalling violation of Williams’s personal rights was traced to bias in the underlying machine learning system, which had been trained mostly on images of people with a predominantly white European appearance. Clearly, in this case, not enough attention was paid to the ethnic diversity of the U.S. population. More importantly, the case illustrates how people who do not use a given software system (or are even unaware of its existence) can nonetheless be affected by its operation.
