Abstract

Bayesian foraging in patchy environments requires that foragers have information about the distribution of resources among patches (prior information), either set by natural selection or learned from past experience. We test the hypothesis that bumblebee foragers can rapidly learn prior information from past experience in two very different experimental environments. In the high‐variance environment (patches of low and high quality), stochastic optimality models predicted that finding rewards should sometimes sharply increase an optimal forager’s tendency to stay in a patch (an incremental response), whereas in the uniform environment, finding rewards should always decrease the tendency to stay (a decremental response). We use Cox regression models to show that, in a matter of hours, bees learned to match both predicted responses, resulting in a reward intake rate that averaged 80% of the predicted maximum. Following training in either environment, bees’ adaptive behavior carried over to a common test environment, thus confirming the influence of memorized prior information. Although Bayesian foraging by learning is often presumed, this study is the first to clearly isolate the adaptive use of a learned prior expectation. More generally, it highlights the remarkable adaptive plasticity of an important generalist pollinator and agent of selection.
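
As a minimal illustration of the logic behind these predictions, the Python sketch below computes a Bayesian forager's posterior expected number of rewards remaining in a patch after a few flower visits. A two-type prior (poor or rich patches) stands in for the high-variance environment and a single-type prior for the uniform environment; all patch sizes, reward counts, and prior probabilities are hypothetical illustrative values, and the hypergeometric depletion model is a simplified stand-in for the paper's stochastic optimality models.

```python
from math import comb

def posterior_expected_remaining(n_visited, k_found, patch_types, n_flowers=20):
    """Posterior expected number of rewarding flowers left unvisited after
    probing n_visited of n_flowers in a patch and finding k_found rewards.

    patch_types: list of (prior_probability, total_rewarding_flowers) pairs
    describing the forager's prior over patch quality. Flowers are probed
    without replacement, so the likelihood of each patch type is hypergeometric.
    """
    weights = []
    for prior, r_total in patch_types:
        # Hypergeometric likelihood of the observed sample; math.comb returns 0
        # when the sample is impossible for this type (e.g. k_found > r_total).
        likelihood = (comb(r_total, k_found)
                      * comb(n_flowers - r_total, n_visited - k_found)
                      / comb(n_flowers, n_visited))
        weights.append((prior * likelihood, r_total))
    normaliser = sum(w for w, _ in weights)
    return sum(w * (r_total - k_found) for w, r_total in weights) / normaliser

# Hypothetical priors (illustrative numbers only, not the experimental values):
high_variance = [(0.5, 2), (0.5, 10)]   # patches are either poor or rich
uniform = [(1.0, 6)]                    # every patch holds 6 rewarding flowers

print("rewards found   E[remaining] high-variance   E[remaining] uniform")
for k in range(5):
    print(f"{k:13d}   {posterior_expected_remaining(4, k, high_variance):27.2f}"
          f"   {posterior_expected_remaining(4, k, uniform):20.2f}")
```

Under the bimodal prior, each additional reward found shifts the posterior toward the rich patch type and raises the expected number of rewards remaining (an incremental response), whereas under the uniform prior depletion dominates and every reward found lowers it (a decremental response).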
