Abstract

Many decisions and assessments made by fisheries managers and researchers rely on estimates. The confidence stakeholders have in these decisions is greater when those estimates are accurate and precise. Complex statistical models are often used in fisheries management and research to improve these estimates. The models usually assume the underlying data are distributed according to some theoretical distribution (e.g., Poisson, gamma), but in reality fishery data usually only approximate theoretical distributions, breaching their assumptions to varying degrees. If the models are not sufficiently robust, these breaches can produce biased and/or imprecise estimates, leading to excessive Type I and Type II errors, both of which can lead to poor decisions. We examined the robustness of seven models used in fisheries research to varying degrees of breaches in their distribution assumptions. Using six different zero-inflated gamma and Poisson distributions and three different sample sizes, we examined the mean bias, confidence interval width and actual Type I error rate (as opposed to the modeled α of 0.05) of these models by comparing the estimates to the known population parameters. We found that the more complex models tended to be less robust to breaches of their distribution assumptions than the simpler normal model (sample mean). We recommend that the robustness of a chosen statistical model be assessed a priori to provide stakeholders with some confidence in the accuracy and precision of the estimates, and we present a simple iterative method to do this.
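The simulation approach described above can be illustrated with a minimal sketch. The code below is an assumed, simplified version of the authors' procedure (the function names, parameter values and the choice of the sample-mean estimator with a normal-theory confidence interval are illustrative, not taken from the paper): draw repeated samples from a known zero-inflated gamma population, estimate the mean each time, and record the mean bias, average confidence-interval width and the actual Type I error rate, i.e. how often the nominal 95% interval misses the true population mean.

```python
import numpy as np

def zig_sample(n, p_zero, shape, scale, rng):
    """Draw n values from a zero-inflated gamma: zero with probability
    p_zero, otherwise gamma(shape, scale)."""
    vals = rng.gamma(shape, scale, size=n)
    vals[rng.random(n) < p_zero] = 0.0
    return vals

def assess_robustness(n, p_zero, shape, scale, reps=2000, rng=None):
    """Monte Carlo check of the sample-mean estimator against a known
    zero-inflated gamma population: returns (mean bias, mean width of the
    nominal 95% CI, actual Type I error rate = fraction of CIs that miss
    the true mean; should be near 0.05 if the model is robust)."""
    rng = rng or np.random.default_rng()
    true_mean = (1 - p_zero) * shape * scale  # known population parameter
    z = 1.96                                  # normal quantile for alpha = 0.05
    biases, widths, misses = [], [], 0
    for _ in range(reps):
        x = zig_sample(n, p_zero, shape, scale, rng)
        m = x.mean()
        se = x.std(ddof=1) / np.sqrt(n)
        biases.append(m - true_mean)
        widths.append(2 * z * se)
        if abs(m - true_mean) > z * se:
            misses += 1
    return np.mean(biases), np.mean(widths), misses / reps

bias, width, type1 = assess_robustness(n=50, p_zero=0.4, shape=2.0,
                                       scale=3.0, rng=np.random.default_rng(1))
print(f"mean bias={bias:.3f}, CI width={width:.3f}, actual Type I rate={type1:.3f}")
```

Repeating this over a grid of sample sizes and zero-inflation levels, and for each candidate model in place of the sample mean, gives the kind of a priori robustness check the abstract recommends.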
