Abstract

Carvalho et al. (2021) provided a “cookbook” for implementing contemporary model diagnostics, which included convergence checks, examinations of fits to data, retrospective and hindcasting analyses, likelihood profiling, and model-free validation. However, it remains unclear whether these widely used diagnostics behave consistently in the presence of model misspecification, and whether there are trade-offs in diagnostic performance that the assessment community should consider. This illustrative study uses a statistical catch-at-age simulation framework to compare diagnostic performance across a spectrum of correctly specified and misspecified assessment models that incorporate compositional, survey, and catch data. Results contextualize how reliably common diagnostic tests perform given the degree and nature of known model issues, including parameter misspecification, process misspecification, and combinations thereof, as well as the trade-offs among model fits, prediction skill, and retrospective bias that analysts must weigh when evaluating diagnostic performance. A surprising number of misspecified models passed certain diagnostic tests, although for most tests failure became more frequent as misspecification increased. Nearly all models that failed multiple tests were misspecified, underscoring the value of examining multiple diagnostics during model evaluation. Diagnostic performance was best (most sensitive) when recruitment variability was low and historical exploitation rates were high, likely because this scenario induced greater contrast in the data, particularly in the indices of abundance. These results suggest caution when using standalone diagnostic results to select a “best” assessment model, to choose the set of models included in an ensemble, or to inform model weighting. The discussion advises stock assessors to consider the interplay among these multiple dynamics. Future work should evaluate how the resolution of the production function, the quality and quantity of data time series, and exploitation history influence diagnostic performance.
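To make one of the diagnostics above concrete, the sketch below computes Mohn's rho, the standard summary statistic for retrospective bias: the mean relative difference between each retrospective peel's terminal-year estimate and the full model's estimate for that same year. This is an illustrative sketch only; the function name and the spawning-biomass values are hypothetical and are not taken from this study. Values far from zero in either direction indicate systematic retrospective bias.

```python
# Minimal sketch: Mohn's rho for retrospective bias (hypothetical data).
# rho = mean over peels of (peel estimate - full-model estimate) / full-model estimate,
# evaluated at each peel's terminal year.

def mohns_rho(full_estimates, peel_estimates):
    """full_estimates: dict {year: estimate} from the full model.
    peel_estimates: list of dicts, one per retrospective peel,
    each mapping year -> estimate; a peel's terminal year is its max key."""
    relative_errors = []
    for peel in peel_estimates:
        terminal_year = max(peel)
        full = full_estimates[terminal_year]
        relative_errors.append((peel[terminal_year] - full) / full)
    return sum(relative_errors) / len(relative_errors)

# Hypothetical spawning-biomass estimates (tonnes) for a 3-peel retrospective:
full = {2018: 1000.0, 2019: 1100.0, 2020: 1200.0, 2021: 1250.0}
peels = [
    {2018: 980.0, 2019: 1050.0, 2020: 1150.0},  # 1-year peel, terminal year 2020
    {2018: 960.0, 2019: 1020.0},                # 2-year peel, terminal year 2019
    {2018: 940.0},                              # 3-year peel, terminal year 2018
]
# Negative rho: peel terminal-year estimates fall below the full-model estimates.
print(f"Mohn's rho: {mohns_rho(full, peels):.3f}")
```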
