Behavioural ecology has been remarkably successful as a discipline, in part because much of the theory that has been developed is readily testable through manipulative experiments in the laboratory and the field. Although the preferred currency for most theories is fitness, it has usually been possible to incorporate modifications or simplifying assumptions that allow the use of more readily observed, surrogate measures of fitness. Admittedly this does require some compromise, but in the grand tradition of behavioural ecology this trade-off is easy to solve: the benefits outweigh the costs.

Moody et al. (1996) (henceforth referred to as MHM) explored the role that risk dilution will have on the results of experiments that investigate predation risk – foraging trade-offs. Assuming that the probability of being killed by a predator is inversely related to the number of animals feeding at a site, they developed a model that incorporates risk dilution into ideal free distributions (IFDs) under predation risk. Due to risk dilution, their model suggests that patch choice decisions will be affected not only by the presence of predators and relative food availability, but also by the total number of animals. Their suggestions are very useful in advancing our theoretical understanding of this problem. However, we disagree with their critical assessment of the use of the IFD as an experimental tool to quantify decisions involving conflicting demands.

In 1989, Abrahams and Dill (henceforth referred to as AD) published a paper that described an experimental technique that could be used to quantify the energetic equivalence of the risk of predation and, more importantly, presented the results of experiments that tested its validity. The approach is based upon a continuous-input IFD. With this type of IFD, when food is the only parameter that describes patch quality, the spatial distribution of the foragers will match the spatial distribution of their food.
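The input-matching rule of the continuous-input IFD can be sketched numerically. The following is a minimal illustration, assuming the standard matching prediction (the fraction of foragers in a patch equals that patch's share of total food input); the input rates and group size are hypothetical, not taken from the original experiments.

```python
def ifd_prediction(input_rates, n_foragers):
    """Predicted number of foragers per patch under input matching:
    each patch receives a share of foragers proportional to its share
    of the total food input."""
    total = sum(input_rates)
    return [n_foragers * r / total for r in input_rates]

# Two patches delivering food at 6 and 2 items/min, 8 foragers:
# input matching predicts a 6:2 split of the foragers.
print(ifd_prediction([6.0, 2.0], 8))
```

Under this rule, per-capita intake is equalised across patches, which is what makes deviations from matching (once a predator is added to one patch) interpretable as a cost of predation risk.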
A large number of authors have demonstrated that under experimental conditions, animals closely conform to an IFD (for a review see Milinski and Parker 1991). Incorporation of an additional patch parameter (i.e. the risk of predation) should result in a change in the spatial distribution of animals that reflects the relative change in patch quality. Differences in individual intake rates will then provide a measure of the assessed value of this parameter from the animals' perspective, and can be determined empirically. This approach describes our first experiment. Our second experiment then tested a prediction (derived from the results of the first experiment) concerning how much additional food must be added to the dangerous patch in order for the foragers to consider it equivalent to the safe patch. To make this prediction, we assumed a linear relationship between energy and fitness. This was not intended to describe the actual relationship between these two parameters, but to make it possible to interpret deviations in the spatial distribution with respect to the relationship between energy and fitness. For example, if more animals than we predicted used the dangerous location after the addition of extra food, then we concluded that the fitness benefits of the patch increased by more than the manipulated energy benefits. Conversely, if the addition of extra food resulted in fewer animals than predicted using the dangerous location, then we concluded that the fitness benefits of the patch increased by less than the manipulated energy benefits. These experiments demonstrated that even with the linearity assumption, we could fairly accurately predict the amount of additional food that must be added to a dangerous patch to offset the risk of predation.
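The two-step logic described above can be sketched in code. This is a hedged illustration with hypothetical numbers, not the authors' actual calculation: step 1 estimates the energetic equivalent of risk as the per-capita intake difference between the dangerous and safe patches at equilibrium; step 2 uses the linearity assumption to predict the extra food input needed to draw foragers back to an even split.

```python
def risk_cost(input_rate, n_safe, n_danger):
    """Step 1: with equal food input to both patches, the per-capita
    intake surplus in the (under-used) dangerous patch estimates the
    energetic equivalent of predation risk."""
    return input_rate / n_danger - input_rate / n_safe

def extra_food_needed(n_total, cost):
    """Step 2: assuming fitness is linear in energy, add enough input
    to the dangerous patch that, at an even split, its per-capita
    intake exceeds the safe patch's by the risk cost."""
    n_even = n_total / 2
    return cost * n_even

# Hypothetical example: equal inputs of 4 items/min, but 6 of 8 fish
# use the safe patch. The intake surplus in the dangerous patch
# (4/2 - 4/6) is the estimated cost of risk; multiplying by the
# even-split group size gives the compensating input.
c = risk_cost(4.0, n_safe=6, n_danger=2)
print(extra_food_needed(8, c))
```

Deviations from this prediction are then diagnostic, exactly as the text describes: more animals than predicted in the dangerous patch implies a steeper-than-linear energy-fitness relationship over that range, fewer implies a shallower one.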
MHM were critical of this approach for three reasons: (1) our calculations did not incorporate risk dilution; (2) the relationship between fitness and energy should have no influence on the calculations necessary to

Behav Ecol Sociobiol (1998) 44: 147–148 © Springer-Verlag 1998