Abstract

Humans constantly decide among multiple action plans. Carrying out one action usually implies that other plans are suppressed. Here we make use of inter-trial effects to determine whether the suppression of non-chosen action plans is due to proactive preparation for upcoming decisions or due to retroactive influences from previous decisions. Participants received rewards for timely and accurate saccades to targets appearing to the left or right of fixation. Each block interleaved trials with one target (single-trials) and trials with two targets (choice-trials). Whereas single-trial rewards were always identical, the rewards for the two targets in choice-trials could be either identical (unbiased) or different (biased) within a block. We analyzed single-trial latencies as a function of idiosyncratic choice consistency or reward bias, the previous trial type, and whether the same or the other target was selected in the preceding trial. After choice-trials, single-trial responses to the previously non-chosen target were delayed. For biased choices, inter-trial effects were strongest when choices were followed by a single-trial to the non-chosen target. In the unbiased condition, inter-trial effects increased with increasing individual consistency of choice behavior. These findings suggest that the suppression of alternative action plans is not coupled to target selection and motor execution but instead depends on top-down signals such as the overall preference of one target over another.
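
As an illustration of the inter-trial analysis described above, the sketch below groups single-trial latencies by whether the preceding trial was a choice-trial and whether the current target had been chosen or not chosen on that trial. This is a minimal, hypothetical sketch in Python, not the authors' analysis code; the trial fields (trial_type, target, chosen, latency) and the example values are assumptions, and trials are assumed to be listed in presentation order.

    from statistics import mean

    # Hypothetical trial records in presentation order (fields are assumptions):
    # trial_type: 'single' or 'choice'; target: side of the single target;
    # chosen: side selected on that trial; latency: saccade latency in ms.
    trials = [
        {'trial_type': 'choice', 'target': None,    'chosen': 'left',  'latency': 210},
        {'trial_type': 'single', 'target': 'right', 'chosen': 'right', 'latency': 245},
        {'trial_type': 'single', 'target': 'left',  'chosen': 'left',  'latency': 205},
        {'trial_type': 'choice', 'target': None,    'chosen': 'right', 'latency': 220},
        {'trial_type': 'single', 'target': 'right', 'chosen': 'right', 'latency': 200},
    ]

    # Collect single-trial latencies that directly follow a choice-trial,
    # split by whether the current target was chosen or non-chosen there.
    groups = {}
    for prev, cur in zip(trials, trials[1:]):
        if cur['trial_type'] != 'single' or prev['trial_type'] != 'choice':
            continue
        label = ('previously chosen' if cur['target'] == prev['chosen']
                 else 'previously non-chosen')
        groups.setdefault(label, []).append(cur['latency'])

    # The reported inter-trial effect corresponds to longer mean latencies in
    # the 'previously non-chosen' group than in the 'previously chosen' group.
    for label, latencies in groups.items():
        print(label, round(mean(latencies), 1), 'ms')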

Highlights

  • While humans interact with their environment, they constantly choose between multiple possible actions

  • We wanted to know whether the suppression of alternative motor plans arises from proactive preparation for upcoming decisions or from the retroactive influence of previous decisions

  • If alternative action plans were automatically suppressed after motor execution, non-selected action plans should be inhibited in the subsequent trial, regardless of whether there was an external reason to prefer one target over the other

Introduction

While humans interact with their environment, they constantly choose between multiple possible actions. Selection among multiple action plans can be optimized by considering the expected value of the options. Such a value-based selection process is determined by top-down factors and by the history of reward-based selection [1,2,3,4,5]. Learned reward associations can bias covert [6] as well as overt attentional selection [7] and continue to do so even when they compete with the top-down goals of the task at hand [8].
