Abstract

Computer Vision (CV) has become an essential tool for developers looking to personalize user experiences. In particular, commercial CV services can be used by those who are not machine learning experts but who want to enhance their apps and services with vision capabilities. While the performance of CV has become increasingly human-like, its “social behaviors” and their compatibility with human values are of concern. In contrast to algorithmic decision-making, where fairness is used to evaluate system behavior, CV is often evaluated for stereotyping, i.e., the extent to which systems reflect prevalent social beliefs. This paper proposes that viewing stereotyping as inherently negative is unhelpful for improving human-AI interaction. Rather, it is more fruitful to separate the observation of a social behavior (i.e., documenting what a machine does in relation to a human) from its judgment (i.e., relating the behavior to social norms). Because norms differ across contexts and application areas, such an approach better reflects the real world, which is characterized by diversity and opposing views. However, it requires us to face up to two truths: i) humans, not machines, are the problem; ii) we must decide what degree of human-likeness we ultimately want, since technologies designed to mimic us will reflect social bias.
