Abstract

The mean estimation task, which explicitly asks observers to estimate the mean feature value of multiple stimuli, is a fundamental paradigm in research areas such as ensemble coding and cue integration. The current study uses computational models to formalize how observers summarize information in mean estimation tasks. We compare our Fidelity-based Integration Model (FIM) with alternative models on their ability to simulate observed patterns in within-trial weight distribution, across-trial information integration, and set-size effects on mean estimation accuracy. Experiments showed unequal weighting within trials in both sequential and simultaneous mean estimation tasks. Observers implicitly overestimated trial means that fell below the global mean and underestimated trial means that fell above it. Mean estimation performance declined and then stabilized as set size increased. FIM successfully simulated all of these patterns, whereas the alternative models did not. FIM's information-sampling structure provides a new way to interpret the capacity limit of visual working memory and sub-sampling strategies. As a modeling framework, FIM supports task-dependent modeling of various ensemble coding paradigms, facilitating the synthesis of findings across studies in the literature.
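To make the two core ideas in the abstract concrete, the following is a minimal sketch of a fidelity-weighted mean estimate with shrinkage toward a global mean. It is an illustration only, assuming (a) items within a trial are weighted in proportion to their encoding fidelity and (b) estimates are pulled toward the long-run global mean; the function name `fim_mean_estimate` and the parameters `fidelities`, `global_mean`, and `prior_weight` are hypothetical and do not reproduce the paper's actual FIM specification.

```python
import numpy as np

def fim_mean_estimate(values, fidelities, global_mean, prior_weight=0.2):
    """Hypothetical sketch of fidelity-weighted mean estimation.

    values      : feature values of the items presented in one trial
    fidelities  : per-item encoding fidelities (higher = more reliable),
                  so poorly encoded items contribute less to the estimate
    global_mean : long-run mean across trials, acting as a prior
    prior_weight: strength of the pull toward the global mean (assumed)
    """
    values = np.asarray(values, dtype=float)
    w = np.asarray(fidelities, dtype=float)
    w = w / w.sum()                      # normalize weights within the trial
    trial_estimate = np.dot(w, values)   # unequal, fidelity-based weighting

    # Shrinkage toward the global mean: trials below it are overestimated,
    # trials above it are underestimated, matching the reported pattern.
    return (1 - prior_weight) * trial_estimate + prior_weight * global_mean
```

Under these assumptions, the first term captures within-trial unequal weighting and the second term captures the across-trial regression toward the global mean described in the abstract.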
