Abstract

Human subjects were trained in the noncontingent binary choice situation under one of two levels of event probability (π = .6, .8) and of payoff (0¢, 1¢). These data were used to evaluate the ability of the Reinforcement-Extinction (R-E) Model and the Weak-Strong (W-S) Conditioning Model to describe marginal and first-order response-event dependencies throughout learning, as well as second-order response-event dependencies and run curves during the last 100 trials. Both models described the asymptotic statistics about equally well, with the largest discrepancies being in the fits to the run curves. Generally poorer preasymptotic fits were attributed to the predicted statistics approaching their asymptotic values too rapidly. Parameter estimates for both models varied with π and payoff throughout learning. From these estimates it was inferred that learning occurs on both correct and incorrect trials; correct trials appeared more important in the R-E Model, while incorrect trials assumed major importance for the W-S Model, especially prior to overshooting.
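For readers unfamiliar with the paradigm, the following is a minimal sketch of the noncontingent binary prediction task described above: on every trial the subject predicts one of two events, and event E1 occurs with probability π regardless of the response. The learning rule shown is a generic linear-operator update included only for illustration; it is not the paper's R-E or W-S model, and the parameter names (theta, pi, n_trials) are assumptions.

```python
"""Illustrative simulation of a noncontingent binary prediction task.

The linear-operator update used here is a stand-in assumption, not the
R-E or W-S model evaluated in the paper.
"""
import random


def simulate(pi=0.8, theta=0.05, n_trials=300, seed=0):
    rng = random.Random(seed)
    p = 0.5                       # probability of predicting E1; start unbiased
    choices = []
    for _ in range(n_trials):
        predicts_e1 = rng.random() < p      # subject's prediction on this trial
        event_is_e1 = rng.random() < pi     # noncontingent event schedule
        choices.append(predicts_e1)
        # Linear-operator update: move p toward 1 after E1, toward 0 after E2,
        # independently of whether the prediction was correct.
        p = p + theta * (1.0 - p) if event_is_e1 else p * (1.0 - theta)
    return choices


if __name__ == "__main__":
    for pi in (0.6, 0.8):
        last_100 = simulate(pi=pi)[-100:]
        print(f"pi={pi}: mean P(A1) over last 100 trials = "
              f"{sum(last_100) / len(last_100):.2f}")
```

Under this simple rule the asymptotic choice probability tends toward π (probability matching), which is the kind of marginal statistic, along with run curves and higher-order response-event dependencies, against which the abstract reports the two models being compared.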
