Abstract Estimates of stock size for a specific year change when a new year of data is added to a stock assessment model, and some assessments exhibit a unidirectional pattern in these retrospective differences. Under the assumption that the most recent stock assessment is the most reliable, retrospective patterns are often misinterpreted as a measure of estimation bias that can be corrected. The logical fallacy of this interpretation is exposed when yet another year of data is added and the estimates that were assumed to be true are now considered biased. True values of estimated parameters are needed to infer bias. For example, simulation-estimation experiments can produce retrospective patterns from misspecified estimation models that assume time-varying processes are stationary. These simulations show that retrospective patterns are not a reliable measure of bias, and that retrospective adjustments may move estimates further from the true values. Therefore, the terminology of retrospective “bias” and “correction” is misleading. Retrospective patterns can be an informative diagnostic for identifying and confronting model misspecification, and if they cannot be reduced with respecified models, they can be communicated as a measure of uncertainty for consideration in precautionary management.
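As a minimal sketch of the simulation-estimation idea described above (not the paper's actual operating or estimation models), the Python example below simulates a survey index from a population whose natural mortality drifts upward over time, fits a deliberately misspecified model that assumes mortality is stationary, and summarizes the resulting directional retrospective pattern with Mohn's rho. All model structure, parameter values, and function names are illustrative assumptions.

```python
# Toy illustration: a time-varying process (increasing natural mortality M)
# assessed with an estimator that assumes M is constant. Dropping terminal
# years ("peels") and refitting produces a one-directional retrospective
# pattern, summarized here by Mohn's rho.
import numpy as np

rng = np.random.default_rng(1)

# --- Operating model: exponential decline with time-varying M ---
n_years = 30
M_true = 0.2 + 0.01 * np.arange(n_years)       # M increases every year
N_true = np.empty(n_years)
N_true[0] = 1000.0
for t in range(1, n_years):
    N_true[t] = N_true[t - 1] * np.exp(-M_true[t - 1])

# Survey index observed with lognormal error
index = N_true * np.exp(rng.normal(0.0, 0.1, n_years))

# --- Misspecified estimation model: a single, stationary M ---
def fit_constant_M(obs):
    """Least-squares fit of log N0 and one constant M to the log index."""
    t = np.arange(len(obs))
    slope, intercept = np.polyfit(t, np.log(obs), 1)   # log N_t ~ log N0 - M*t
    N0_hat, M_hat = np.exp(intercept), -slope
    return N0_hat * np.exp(-M_hat * t)                  # estimated abundance

# --- Retrospective peels and Mohn's rho for terminal-year abundance ---
n_peels = 5
full_fit = fit_constant_M(index)
rho_terms = []
for p in range(1, n_peels + 1):
    peel_fit = fit_constant_M(index[: n_years - p])
    term = n_years - p - 1                              # terminal year of the peel
    rho_terms.append((peel_fit[term] - full_fit[term]) / full_fit[term])

print(f"Mohn's rho (terminal abundance): {np.mean(rho_terms):+.3f}")
```

Because the true decline steepens over time, each shorter peel fits a flatter (lower-M) trajectory than the full series, so the peel estimates sit consistently on one side of the full-series estimates and Mohn's rho is directional; the pattern reflects the stationarity misspecification rather than a fixed bias that could be subtracted off.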