Abstract

In a stochastic environment, animals should, and in fact do, base their foraging decisions not only on the expected net benefits but also on the associated variances. In this study I show how the RPS (relative payoff sum) learning rule (Harley, 1981) deals with the problem of choosing between two food patches whose expected net benefits are equal but whose variances differ. The RPS learning rule is a stochastic rule in which the probability of choosing a patch is proportional to the relative payoff that the patch has yielded so far. To account for the decay of memory over time, the RPS rule gives more weight to recent payoffs. Using two different simulation procedures, I show that the RPS learning rule is either indifferent between constant and variable rewards or develops a preference for constant over variable rewards, depending on the foraging regime simulated. The RPS learning rule therefore represents a mechanism that can produce risk-neutral or risk-averse foraging preferences. I discuss how this result could come about and what it implies for empirical findings in a number of animal species.
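The abstract does not reproduce the update equation, but the rule it describes (choice probability proportional to each patch's decayed payoff sum) can be sketched in simulation. The following is a minimal, illustrative sketch, assuming Harley's (1981) form in which each patch carries a memory-discounted payoff sum anchored by a small residual value; the memory factor, residual, and the specific reward schedule (a constant patch versus an equal-mean variable patch) are assumptions for illustration, not the paper's actual simulation procedures:

```python
import random

def simulate_rps(n_trials=1000, memory=0.95, residual=0.1, seed=None):
    """Illustrative sketch of an RPS-style learning rule for two patches.

    Patch 0 pays a constant reward of 1.0; patch 1 pays 0.0 or 2.0 with
    equal probability (same expected payoff, higher variance).
    Returns the proportion of choices made to the constant patch.
    """
    rng = random.Random(seed)
    payoff_sum = [residual, residual]  # decayed running payoff sum per patch
    choices_constant = 0

    for _ in range(n_trials):
        # Choice probability proportional to the relative payoff sums.
        total = payoff_sum[0] + payoff_sum[1]
        patch = 0 if rng.random() < payoff_sum[0] / total else 1

        reward = 1.0 if patch == 0 else rng.choice([0.0, 2.0])

        # Memory decay toward the residual: older payoffs are discounted,
        # so recent payoffs carry more weight.
        for i in (0, 1):
            payoff_sum[i] = memory * payoff_sum[i] + (1 - memory) * residual
        payoff_sum[patch] += reward

        choices_constant += (patch == 0)

    return choices_constant / n_trials

if __name__ == "__main__":
    print(f"Proportion of choices to constant patch: {simulate_rps(seed=1):.3f}")
```

A proportion near 0.5 would correspond to risk-neutral behaviour, while values above 0.5 would indicate risk aversion; under this sketch the outcome depends on parameters such as the memory factor and the reward schedule, consistent with the abstract's point that the foraging regime simulated determines which preference emerges.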
