Abstract
Value-driven attentional capture (VDAC) refers to the phenomenon whereby stimulus features associated with greater reward value attract more attention than those associated with smaller reward value. To date, the majority of VDAC research has revealed that the relationship between reward history and attentional allocation follows associative learning rules. Accordingly, a mathematical implementation of associative learning models and a systematic comparison among them can elucidate the underlying process and properties of VDAC. In this study, we implemented the Rescorla-Wagner, Mackintosh (Mac), Schmajuk-Pearce-Hall (SPH), and Esber-Haselgrove (EH) models to determine whether different models predict different outcomes when critical parameters in VDAC are adjusted. Simulation results were compared with experimental data from a series of VDAC studies by fitting two key model parameters, associative strength (V) and associability (α), using the Bayesian information criterion as a loss function. The results showed that SPH-V and EH-α outperformed the other implementations in accounting for phenomena related to VDAC, such as expected value, training session, switching (or inertia), and uncertainty. Although the models' V was sufficient to simulate VDAC when expected value was the main experimental manipulation, their α could predict additional aspects of VDAC, including uncertainty and resistance to extinction. In summary, associative learning models capture the crucial aspects of behavioral data from VDAC experiments and elucidate the underlying dynamics, yielding novel predictions that remain to be tested.
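To make the fitting setup concrete, the following is a minimal sketch (not the authors' code) of the classic Rescorla-Wagner update for associative strength V, together with the Bayesian information criterion used here as a loss function. Parameter names (`alpha`, `beta`, `lam`) follow the standard textbook formulation, ΔV = αβ(λ − V); the specific values below are illustrative only.

```python
import math

def rescorla_wagner(V, alpha, beta, lam):
    """One Rescorla-Wagner update: associative strength V moves toward
    the asymptote lam (set by reward magnitude) at a rate governed by
    the learning-rate parameters alpha (cue salience) and beta
    (outcome salience)."""
    return V + alpha * beta * (lam - V)

def bic(log_likelihood, k, n):
    """Bayesian information criterion: penalizes goodness of fit
    (log-likelihood) by the number of free parameters k, scaled by
    the number of observations n. Lower BIC = preferred model."""
    return k * math.log(n) - 2.0 * log_likelihood

# Illustrative training phase: 10 rewarded trials (lam = 1) from V = 0.
V = 0.0
for _ in range(10):
    V = rescorla_wagner(V, alpha=0.3, beta=1.0, lam=1.0)
# V approaches the asymptote 1 as a negatively accelerated curve.
```

Note that the Rescorla-Wagner model keeps α fixed, whereas the Mac, SPH, and EH models additionally update associability α trial by trial, which is what allows them to address uncertainty and extinction effects.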