Considerable research has shown that people make biased decisions in “optimal stopping problems”, where options are encountered sequentially and there is no opportunity to recall rejected options or to know upcoming options in advance (e.g. when flat hunting or choosing a spouse). Here, we used computational modelling to identify the mechanisms that best explain decision bias in an especially realistic version of this problem: the full-information problem. We first eliminated a number of factors as potential instigators of bias. We then examined sequence length and payoff scheme, two manipulations for which an optimality model recommends adjusting the sampling rate. Participants were reluctant to increase their sampling rates when it was optimal to do so, leading to greater undersampling bias. Our comparison of several computational models of bias demonstrates that many participants maintain these relatively low sampling rates because of suboptimally pessimistic expectations about the quality of future options (i.e. a mis-specified prior distribution). These results support a new theory about how humans solve full-information problems. Understanding the causes of decision error could improve how we conduct real-world sequential searches, for example by informing how online shopping or dating applications present options to users.