Abstract

Web search increasingly provides a platform for users to seek advice on important personal decisions, but it may be biased in several different ways. One result of such biases is the search engine manipulation effect (SEME): when a list of search results relates to a debated topic (e.g., veganism) and promotes documents pertaining to a particular viewpoint (e.g., by ranking them higher), users tend to adopt this advantaged viewpoint. However, the detection and mitigation of SEME are complicated by the current lack of empirical understanding of its underlying mechanisms. This dissertation investigates which algorithmic and cognitive biases play a role in SEME concerning debated topics, and to what degree.

RQ1. What set of labels can accurately represent viewpoints of textual documents on debated topics? Studying algorithmic and cognitive biases in the context of web search on debated topics requires accurate viewpoint labeling of documents. RQ1 investigates how to best represent the viewpoints of textual documents on debated topics. The first step in this work was introducing perspectives as an additional dimension of viewpoint labels for textual documents (i.e., adding people's underlying motivations for taking a given stance) and showing how they can be discovered automatically using Joint Topic Models. My future research will evaluate whether viewpoint labels consisting of stances and perspectives are accurate representations (or whether more nuanced notions are necessary) and describe how to obtain these labels. The work on RQ1 will result in a framework for accurately representing the viewpoints on debated topics expressed by textual documents. This will allow for algorithmic assessment of viewpoint-related ranking bias in search results and for aligning document viewpoints with users' viewpoints.

RQ2. What methods can automatically measure viewpoint-related ranking bias in search results? Several methods have been proposed to measure ranking bias, fairness, and diversity in search results. RQ2 investigates which of these (or which novel) methods can be used to assess viewpoint-related ranking bias. The first contribution to RQ2 was demonstrating how to assess viewpoint-related ranking bias in search results using ranking fairness metrics for categorical viewpoint labels and evaluating which specific methods work best in which situation. Going forward, I plan to develop methods that assess viewpoint-related ranking bias in more complex settings. Furthermore, I aim to assess viewpoint-related ranking bias in real search results on debated topics. This work will contribute novel evaluation metrics that measure viewpoint-related ranking bias in search results, a set of guidelines for when and how to use them (accompanied by a web-based demo), as well as directions for practitioners regarding viewpoint-related ranking bias in real search results.

RQ3. What cognitive biases may contribute to the process of attitude change on debated topics in users of web search engines? Being able to measure algorithmic ranking bias is not yet enough to understand its effect on human behavior. RQ3 aims at understanding which specific cognitive biases are responsible for SEME, i.e., what reasoning mistakes users make when they change their attitudes after viewing search results. The first contribution to RQ3 was evaluating in a user study whether order effects alone can cause SEME. We found that this may not be the case and describe exploratory results suggesting that exposure effects may play a more important role in causing SEME than previously anticipated. My future work in this area will draw on findings from RQ1 and RQ2 to construct more realistic SEME scenarios and to study interactions between algorithmic and different cognitive biases. The result of this work will be a set of guidelines for how SEME could be avoided by mitigating cognitive user biases in web search.
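To illustrate the kind of viewpoint-related ranking bias assessment described under RQ2, the sketch below scores a ranked result list whose documents carry categorical viewpoint labels by comparing a viewpoint's share at several top-k cutoffs with its share in the whole list. This is a minimal, hypothetical Python sketch in the spirit of rND-style ranking fairness metrics; the function name, cutoffs, and discounting scheme are illustrative assumptions, not the metrics developed in the dissertation.

    from collections import Counter
    import math

    def viewpoint_ranking_bias(ranked_labels, viewpoint="con", cutoffs=(3, 5, 10)):
        # Illustrative rND-style score: how strongly the top of the ranking
        # over- or under-represents one categorical viewpoint label relative to
        # its share in the full result list (0 = proportional, up to 1 = maximally biased).
        n = len(ranked_labels)
        overall_share = Counter(ranked_labels)[viewpoint] / n
        score, normalizer = 0.0, 0.0
        for k in (c for c in cutoffs if c <= n):
            top_share = Counter(ranked_labels[:k])[viewpoint] / k
            weight = 1 / math.log2(k + 1)  # discount deeper cutoffs
            score += weight * abs(top_share - overall_share)
            normalizer += weight * max(overall_share, 1 - overall_share)
        return score / normalizer if normalizer else 0.0

    # Hypothetical example: a result list whose top ranks favor "pro" documents.
    labels = ["pro", "pro", "pro", "neutral", "pro", "con", "con", "neutral", "con", "con"]
    print(round(viewpoint_ranking_bias(labels, viewpoint="con"), 3))

A real assessment would additionally rely on the document-level viewpoint labels from RQ1 (stances, and possibly perspectives) and on a choice of which viewpoints to compare, which is where the guidelines promised under RQ2 come in.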
