Abstract

The assessment of relative model performance using information criteria like AIC and BIC has become routine among functional-response studies, reflecting trends in the broader ecological literature. Such information criteria allow comparison across diverse models because they penalize each model's fit by its parametric complexity—in terms of its number of free parameters—enabling simpler models to outperform similarly well-fitting models of higher parametric complexity. However, criteria like AIC and BIC do not consider an additional form of model complexity, referred to as geometric complexity, which relates specifically to the mathematical form of the model. Models of equivalent parametric complexity can differ in their geometric complexity and thereby in their ability to flexibly fit data. Here we use the Fisher Information Approximation to compare, explain, and contextualize how geometric complexity varies across a large compilation of single-prey functional-response models—including prey-, ratio-, and predator-dependent formulations—reflecting varying apparent degrees and forms of non-linearity. Because a model's geometric complexity varies with the data's underlying experimental design, we also assess which designs are best at leveling the playing field among functional-response models. Our analyses illustrate that (1) large differences in geometric complexity exist among functional-response models, (2) no experimental design can minimize these differences across all models, and (3) even the qualitative ordering of some models from less to more flexible can be reversed by changes in experimental design. Failure to appreciate model flexibility in the empirical evaluation of functional-response models may therefore lead to biased inferences for predator–prey ecology, particularly at the low experimental sample sizes where its impact is strongest. We conclude by discussing the statistical and epistemological challenges that model flexibility poses for the study of functional responses as it relates to the attainment of biological truth and predictive ability.
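
For reference, the standard forms of these criteria (a sketch using the model-selection literature's common notation, not equations excerpted from this article) make the contrast explicit. Writing $L(\hat{\theta})$ for the maximized likelihood, $k$ for the number of free parameters, $n$ for the sample size, $I(\theta)$ for the unit Fisher information matrix, and $\Theta$ for the parameter space:

$$\mathrm{AIC} = -2\ln L(\hat{\theta}) + 2k, \qquad \mathrm{BIC} = -2\ln L(\hat{\theta}) + k\ln n,$$

$$\mathrm{FIA} = -\ln L(\hat{\theta}) + \frac{k}{2}\ln\frac{n}{2\pi} + \ln\int_{\Theta}\sqrt{\det I(\theta)}\,\mathrm{d}\theta .$$

AIC and BIC penalize fit only through the parameter count $k$, whereas the final term of the Fisher Information Approximation is the geometric complexity, which depends on the model's functional form and, through $I(\theta)$, on the experimental design.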

Highlights

  • The literature contains thousands of functional-response experiments (DeLong and Uiterwaal, 2018), each seeking to determine the relationship between a given predator’s feeding rate and its prey’s abundance

  • Applying the Fisher Information Approximation to an encompassing set of functional-response models across experimental designs varying in prey and predator abundances, we find that geometric complexity regularly differs substantially among models of the same parametric complexity, that differences between some models can be reversed by changes to an experiment’s design, and that no experimental design can minimize differences across all models

  • Several syntheses indicate that there is no single model that can characterize predator functional responses in general (Skalski and Gilliam, 2001; Novak and Stouffer, 2021; Stouffer and Novak, 2021). This is consistent with the fact that, to a large degree, the statistical models of the functional-response literature characterize aspects of predator–prey biology for which there is evidence in data, not whether specific mechanisms do or do not occur in nature

Introduction

"Seek simplicity and distrust it." (Alfred North Whitehead, The Concept of Nature, 1919)

The literature contains thousands of functional-response experiments (DeLong and Uiterwaal, 2018), each seeking to determine the relationship between a given predator’s feeding rate and its prey’s abundance. Information-theoretic model-comparison criteria like the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) have rapidly become the preeminent tool for doing so in a principled and quantitative manner (Okuyama, 2013), mirroring their increasing ubiquity across the ecological literature as a whole (Ellison, 2004; Johnson and Omland, 2004; Aho et al., 2014). Criteria like AIC and BIC are intended to make the comparison of model performance an unbiased and equitable process. Additional parameters typically improve a model’s fit to a focal dataset; thus, whether by the principle of parsimony or because such increases in fit typically come at the cost of generality beyond that dataset, model performance is judged by the balance of fit and complexity whenever other reasons to disqualify a model do not apply (Burnham and Anderson, 2002; Höge et al., 2018; but see Evans et al., 2013; Coelho et al., 2019).
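
As a minimal, hypothetical illustration of such a fit-versus-complexity comparison (the data, the Poisson observation model, and all numerical values below are assumptions made for illustration, not taken from this article or its sources), the following Python sketch fits Holling type-I and type-II functional responses by maximum likelihood and compares them by AIC:

# Hypothetical sketch: AIC comparison of Holling type-I vs. type-II fits.
# Data, likelihood, and starting values are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

prey  = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])  # prey abundances offered
eaten = np.array([1.0, 3.0, 5.0, 9.0, 12.0, 15.0])   # prey eaten per trial

def poisson_nll(mu):
    # Negative log-likelihood of the observed counts given expected numbers eaten.
    return -np.sum(eaten * np.log(mu) - mu - gammaln(eaten + 1.0))

def nll_type1(params):
    a = np.exp(params[0])                     # log scale keeps the attack rate positive
    return poisson_nll(a * prey)              # type I: feeding rate linear in prey

def nll_type2(params):
    a, h = np.exp(params)                     # attack rate and handling time
    return poisson_nll(a * prey / (1.0 + a * h * prey))  # type II: saturating

fit1 = minimize(nll_type1, x0=[np.log(0.3)], method="Nelder-Mead")
fit2 = minimize(nll_type2, x0=[np.log(0.3), np.log(0.05)], method="Nelder-Mead")

def aic(nll, k):
    return 2.0 * nll + 2.0 * k                # penalty depends only on the parameter count

print("AIC, Holling type I :", round(aic(fit1.fun, 1), 2))
print("AIC, Holling type II:", round(aic(fit2.fun, 2), 2))

Note that the penalty term registers only that the type-II model carries one extra free parameter (its handling time); differences in the flexibility of the two models' functional forms, i.e. their geometric complexity, play no role in such a comparison.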
