Abstract

In four experiments we assessed the effect of systemic amphetamine on the ability of a stimulus paired with reward and a stimulus that was not paired with reward to support instrumental conditioning; i.e., we trained rats to press two levers, one followed by a stimulus that had been trained in a predictive relationship with a food outcome and the other by a stimulus unpaired with that reward. Here we show, in general accord with predictions from the dopamine re-selection hypothesis [Redgrave and Gurney (2006). Nat. Rev. Neurosci. 7, 967–975], that systemic amphetamine greatly enhanced the performance of lever-press responses that delivered a visual stimulus, whether or not that stimulus had been paired with reward. In contrast, amphetamine had no effect on the performance of responses on an inactive lever that had no stimulus consequences. These results support the notion that dopaminergic activity serves to mark or tag actions associated with stimulus change for subsequent selection (or re-selection) and stand against the more specific suggestion that dopaminergic activity is solely related to the prediction of reward.

Highlights

  • Theoretical interest in the role of dopamine (DA) in learning has focused primarily on its role in reward processing, specifically the suggestion that the burst firing of midbrain dopamine neurons acts as, or reflects, an error in the prediction of reward within the circuitry associated with predictive learning (Montague et al., 1996; Schultz, 1998).

  • Experiment 2: Given the results of Experiment 1, Experiment 2 was conducted to assess the effects of amphetamine against an inactive lever, which is the standard control condition employed in conditioned reinforcement experiments (Robbins et al., 1989).

  • Performance on an inactive lever has been found to be unaffected by amphetamine administration, ruling out general motor effects of amphetamine as the source of its enhancement, and we felt it important to see if we could replicate this effect before moving on to compare the inactive lever with the S− control in Experiment 3.

Introduction

Theoretical interest in the role of dopamine (DA) in learning has focused primarily on its role in reward processing, specifically the suggestion that the burst firing of midbrain dopamine neurons acts as, or reflects, an error in the prediction of reward within the circuitry associated with predictive learning (Montague et al., 1996; Schultz, 1998). Redgrave and Gurney (2006) have recently advanced an alternative ‘‘re-selection’’ hypothesis according to which the phasic DA signal acts to promote the re-selection or repetition of those actions/movements that immediately precede unpredicted events, irrespective of their immediate reward value. In support of this view they point out that (i) the rapidity of the dopaminergic response precludes the identity of the unpredicted activating event from entering any concomitant processing of error signals and (ii) the promiscuous nature of the dopamine response is discordant with the idea that the circuitry is focused solely on reward processing. Given that sensory events reliably evoke burst firing in dopaminergic cells across both stimulus types and species (Ljungberg et al., 1992; Steinfels et al., 1983), this is compatible with a general role for dopamine in the acquisition of neutral, stimulus-supported associations.
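The ''error in the prediction of reward'' invoked by Montague et al. (1996) and Schultz (1998) is usually formalized as a temporal-difference (TD) prediction error. The minimal Python sketch below is our own illustration of that idea, not a model taken from the paper; the function name, learning rate, discount factor, and the single-stimulus task are assumptions made for exposition. It shows how the error evoked at reward delivery shrinks as a stimulus comes to predict food, which is the signal that the re-selection hypothesis reinterprets as an action-marking rather than a purely reward-related quantity.

```python
# Illustrative sketch (assumption, not the authors' model): the phasic DA burst
# modelled as a temporal-difference (TD) prediction error for a single stimulus
# that is repeatedly followed by a food reward.

def td_update(value, reward, next_value, alpha=0.1, gamma=0.95):
    """Return the prediction error (delta) and the updated value estimate."""
    delta = reward + gamma * next_value - value   # phasic DA ~ delta in this account
    return delta, value + alpha * delta

# Early trials yield a large positive error at reward delivery; as the stimulus
# comes to predict the reward, the error at reward delivery shrinks toward zero.
v_stimulus = 0.0
for trial in range(20):
    delta, v_stimulus = td_update(v_stimulus, reward=1.0, next_value=0.0)
    print(f"trial {trial + 1:2d}: prediction error = {delta:.3f}")
```

Under these assumptions, the declining error across trials captures why, on the prediction-error account, dopamine should track reward prediction specifically; the re-selection hypothesis instead treats the same phasic signal as a general tag for whatever action preceded an unpredicted sensory event.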
