Abstract

Understanding how the brain computes choice from sensory information is a central question in perceptual decision-making. Relevant behavioral tasks condition choice on abstract or invariant properties of the stimuli, thus decoupling stimulus-specific information from the decision variable. Among visual tasks, orientation discrimination is a gold standard; however, it is not clear whether a mouse, a recently popular animal model in visual decision-making research, can learn an invariant orientation discrimination task, or what choice strategies it would use. Here we show that mice can solve a discrimination task in which choices are decoupled from the orientation of individual stimuli and depend instead on a measure of relative orientation. Mice learned this task, reaching an upper bound for discrimination acuity of 6 degrees and relying on decision-making strategies that balanced cognitive resources with history-dependent biases. We analyzed behavioral data from n = 40 animals using a novel probabilistic choice model, which we applied to interpret individual biases and behavioral strategies. The model explained variation in performance with task difficulty and identified previously unreported dimensions of variation associated with the circularity of the stimulus space. Furthermore, it showed a larger effect of history biases on animals' choices during periods of lower engagement. Our results demonstrate that mice can learn invariant perceptual representations by combining decision-relevant stimulus information decoupled from low-level visual features, and that the computation of the decision variable depends on the cognitive state.
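
For readers unfamiliar with this class of models, the sketch below illustrates the general form of a probabilistic choice model of the kind described here: the probability of one response is a logistic function of the (circularly wrapped) relative orientation, mixed with a lapse rate and a history-dependent bias. This is a minimal illustrative sketch, not the authors' model; all names and parameter values (e.g., `p_clockwise`, `lapse`, `history_weight`) are assumptions introduced for illustration.

```python
# Illustrative sketch (not the authors' model): a generic probabilistic choice
# model in which the probability of a "clockwise" response depends on the
# relative orientation of the stimuli, a lapse rate, and a history-dependent
# bias. All names and parameter values are assumptions for illustration.
import numpy as np


def wrap_orientation(delta_deg):
    """Map an orientation difference to [-90, 90) degrees.

    Orientation is circular with a period of 180 degrees, so differences
    must be wrapped before entering the decision variable.
    """
    return (delta_deg + 90.0) % 180.0 - 90.0


def p_clockwise(delta_ori_deg, sensitivity=0.3, bias=0.0,
                lapse=0.05, history_weight=0.2, prev_choice=0):
    """Probability of a 'clockwise' choice given the relative orientation.

    delta_ori_deg  : signed orientation difference between stimuli (degrees)
    sensitivity    : slope of the psychometric function (1/degrees)
    bias           : fixed side bias (in units of the decision variable)
    lapse          : probability of a stimulus-independent random choice
    history_weight : strength of the previous-choice bias
    prev_choice    : previous response coded as -1, 0, or +1
    """
    # Decision variable: scaled, wrapped orientation difference plus static
    # and history-dependent biases.
    dv = sensitivity * wrap_orientation(delta_ori_deg) + bias \
        + history_weight * prev_choice
    core = 1.0 / (1.0 + np.exp(-dv))           # logistic psychometric function
    return lapse * 0.5 + (1.0 - lapse) * core  # mix in lapses symmetrically


# Example: a 6-degree relative orientation following a 'clockwise' response.
print(p_clockwise(6.0, prev_choice=+1))
```

The wrapping step reflects the 180-degree periodicity of orientation, the kind of circular stimulus-space structure the abstract refers to, and the `history_weight` term is one simple way to capture history-dependent biases whose influence could vary with engagement.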
