Abstract

This paper examines autocomplete algorithmic bias in leading search engines with respect to three sensitive attributes: gender, race, and sexual orientation. By simulating search query prefixes and calling search engine APIs, 106,896 autocomplete predictions were collected, and their semantic toxicity scores, used as measures of negative algorithmic bias, were computed with machine learning models. Results indicate that search engine autocomplete algorithmic bias is overall consistent with long-standing societal discrimination: historically disadvantaged groups, such as women, Black people, and homosexual people, suffer higher levels of negative algorithmic bias. Moreover, the degree of algorithmic bias varies across topic categories. Implications for search engine mediatization and for the mechanisms and consequences of autocomplete algorithmic bias are discussed.
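
The sketch below is a minimal illustration of the kind of measurement pipeline the abstract describes: collect autocomplete predictions for a query prefix, then score each prediction's toxicity. It assumes the unofficial Google suggest endpoint and Google's Perspective API as the toxicity model; the paper does not name the specific engines, endpoints, or models, so all identifiers and prefixes here are illustrative, not the authors' actual setup.

```python
import requests

SUGGEST_URL = "https://suggestqueries.google.com/complete/search"
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
PERSPECTIVE_KEY = "YOUR_API_KEY"  # placeholder; obtain from Google Cloud


def autocomplete(prefix: str) -> list[str]:
    """Return autocomplete predictions for a simulated query prefix."""
    resp = requests.get(
        SUGGEST_URL,
        params={"client": "firefox", "q": prefix},
        timeout=10,
    )
    resp.raise_for_status()
    # Response shape: [prefix, [suggestion1, suggestion2, ...]]
    return resp.json()[1]


def toxicity(text: str) -> float:
    """Return a [0, 1] toxicity score for one prediction via Perspective."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(
        PERSPECTIVE_URL,
        params={"key": PERSPECTIVE_KEY},
        json=body,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


if __name__ == "__main__":
    # Illustrative prefixes contrasting a sensitive-attribute pair;
    # the paper's actual prefix templates are not specified here.
    for prefix in ["why are women", "why are men"]:
        for prediction in autocomplete(prefix):
            print(f"{prediction!r}: {toxicity(prediction):.3f}")
```

Comparing score distributions across such paired prefixes (female vs. male, Black vs. white, homosexual vs. heterosexual) is one straightforward way to operationalize "negative algorithmic bias" as differential toxicity.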
