Abstract

Recent discussions of model selection and multimodel inference highlight a general challenge for researchers: how to convey the explanatory content of a hypothesized model or set of competing models clearly. The advice from statisticians to scientists employing multimodel inference is to develop a well-thought-out set of candidate models for comparison, though precise instructions for how to do that are typically not given. A coherent body of knowledge, which falls under the general term causal analysis, now exists for examining the explanatory scientific content of candidate models. Much of the literature on causal analysis has been developed recently and, we suspect, may not be familiar to many ecologists. This body of knowledge comprises a set of graphical tools and axiomatic principles to support scientists in their endeavors to create "well-formed hypotheses," as statisticians are asking them to do. Causal analysis is complementary to methods such as structural equation modeling, which provide the means for evaluating proposed hypotheses against data. In this paper, we summarize and illustrate a set of principles that can guide scientists in their quest to develop explanatory hypotheses for evaluation. The principles presented in this paper have the capacity to close the communication gap between statisticians, who urge scientists to develop well-thought-out, coherent models, and scientists, who would like practical advice for exactly how to do that.
