Abstract

The risk allocation hypothesis has inspired numerous studies seeking to understand how temporal variation in predation risk affects prey foraging behavior, but there has been debate about its generality and causes. I examined how imperfect information affects its predictions and sought to clarify the causes of the predicted patterns. I first confirmed that my modeling approach, given a threshold or linear fitness function, produced the risk allocation prediction that prey increase their foraging effort during both low- and high-risk periods as the proportion of high-risk periods increases. However, the causes of this result and its robustness differed between the two fitness functions. When prey that had evolved to use perfect information received imperfect information, risk allocation was reduced. In contrast, prey that had evolved to use imperfect information in some cases reversed the risk allocation prediction. The model also showed that risk allocation occurs even when prey have no knowledge that the proportions of low- and high-risk periods have changed. I conclude that risk allocation is driven largely not by prey expectations about future states of the environment but by the prey's current energetic state and the time remaining. I discuss the consequences for experimental design and for explanations of empirical results.
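
To make the style of model referred to above concrete, the following is a minimal, illustrative sketch (Python/NumPy) of a dynamic state variable model of foraging under temporally varying predation risk with a threshold terminal fitness function. It is not the paper's actual model: the horizon, reserve limits, energetic gains and costs, predation rates, and the helper name optimal_efforts are all assumptions chosen only to show the structure of the backward-induction calculation and how the proportion of high-risk periods enters it.

import numpy as np

T = 50            # time horizon (assumed)
X_MAX = 30        # maximum energy reserves (assumed)
THRESHOLD = 20    # reserves needed for terminal fitness = 1 (threshold function)
EFFORTS = np.linspace(0.0, 1.0, 21)   # candidate foraging efforts
GAIN = 2          # energy gained when foraging succeeds (assumed)
COST = 1          # metabolic cost per time step (assumed)
MU = {"low": 0.01, "high": 0.05}      # predation risk per unit effort (assumed)

def optimal_efforts(p_high):
    """Backward induction over reserves x and time t for a fixed proportion
    of high-risk periods p_high; returns the mean optimal effort in low- and
    high-risk periods, averaged over time steps and non-starved states."""
    # terminal fitness: threshold function of reserves
    F = np.zeros((T + 1, X_MAX + 1))
    F[T, :] = (np.arange(X_MAX + 1) >= THRESHOLD).astype(float)
    policy = {"low": np.zeros((T, X_MAX + 1)), "high": np.zeros((T, X_MAX + 1))}
    for t in range(T - 1, -1, -1):
        F_next = F[t + 1]
        for x in range(X_MAX + 1):
            if x == 0:
                continue  # F[t, 0] stays 0: starvation
            best = {"low": (-1.0, 0.0), "high": (-1.0, 0.0)}
            for u in EFFORTS:
                x_gain = min(x + GAIN - COST, X_MAX)   # reserves if foraging succeeds
                x_fail = max(x - COST, 0)              # reserves if it fails / no foraging
                # expected future fitness conditional on surviving this step
                ev = u * F_next[x_gain] + (1.0 - u) * F_next[x_fail]
                for risk in ("low", "high"):
                    value = (1.0 - MU[risk] * u) * ev  # discount by predation probability
                    if value > best[risk][0]:
                        best[risk] = (value, u)
            # expected fitness before this step's risk state is known
            F[t, x] = (1.0 - p_high) * best["low"][0] + p_high * best["high"][0]
            policy["low"][t, x] = best["low"][1]
            policy["high"][t, x] = best["high"][1]
    cols = np.arange(X_MAX + 1) > 0
    return policy["low"][:, cols].mean(), policy["high"][:, cols].mean()

for p in (0.1, 0.5, 0.9):
    low, high = optimal_efforts(p)
    print(f"p(high risk) = {p:.1f}  mean optimal effort: low-risk = {low:.2f}, high-risk = {high:.2f}")

Scanning the printed summaries across values of p shows whether, under these assumed parameters, optimal effort in both risk states rises as high-risk periods become more common, which is the risk allocation pattern described in the abstract.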
