In daily life, we can not only estimate confidence in our inferences ('I'm sure I failed that exam'), but can also estimate whether those feelings of confidence are good predictors of decision accuracy ('I feel sure I failed, but my feeling is probably wrong; I probably passed'). In the lab, visual metacognition research using simple perceptual tasks and trial-by-trial confidence ratings has repeatedly shown that participants can successfully predict the accuracy of their perceptual choices. Can participants also successfully evaluate 'confidence in confidence' in these tasks? This is the question addressed in this study. Participants performed a simple, two-interval forced-choice numerosity task framed as an exam. Confidence judgements were collected in the form of a 'predicted exam grade'. Finally, we collected 'meta-metacognitive' reports in a two-interval forced-choice design: trials were presented in pairs, and participants had to select the trial on which they thought their confidence (predicted grade) best matched their accuracy (actual grade), effectively minimizing their quadratic scoring rule (QSR) score. Participants successfully selected trials on which their metacognition was better when metacognitive performance was quantified using the area under the type 2 ROC curve (AUROC2), but not when using the 'gold-standard' measure m-ratio. However, further analyses suggested that participants selected trials on which AUROC2 was higher in part via an extreme-confidence heuristic, rather than through explicit evaluation of their metacognitive inferences: when analyses were restricted to trials on which participants gave the same confidence rating, AUROC2 no longer differed as a function of selection, and likewise when trials with extreme confidence ratings were excluded. Together, our results show that participants can make effective metacognitive discriminations about their visual confidence ratings, but that explicit 'meta-metacognitive' processes may not be required.
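As a point of reference (this notation does not appear in the original abstract, and the study's exact per-trial scoring may differ), the quadratic scoring rule for a single trial is typically

\[ \mathrm{QSR} = (c - a)^2, \]

where \(c \in [0, 1]\) is the reported confidence (here, the predicted grade rescaled to a proportion) and \(a \in \{0, 1\}\) codes the actual accuracy. Lower scores indicate confidence that better matches accuracy, so selecting the trial with the smaller QSR amounts to selecting the trial with the better-calibrated confidence report.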