Abstract

Assessing the validity of a model is essential for its credibility, especially when the model is used as a decision-making tool. Complex dynamic fishery models are recommended to investigate the functioning of fisheries and to assess the impact of management strategies, particularly spatial fishing regulations. However, their use is limited by the difficulty and computational cost of parameterizing them and gaining confidence in them, particularly for parameter-rich models. These difficulties are compounded by uncertainty regarding parameter values, many of which are taken from the literature or estimated indirectly. Here we propose a methodology, easily transferable to any complex model, to improve confidence in and understanding of the model. The approach combines sensitivity analysis, scalability of parameters, optimization procedures, and model skill assessment in order to parameterize, validate and achieve the most plausible formulation of a model given the available knowledge, while reducing the computational load. The methodology relies on five steps: (1) sensitivity analysis, (2) classification of parameters into a hierarchy according to their sensitivity and the nature of their uncertainty, (3) building of alternative formulations, (4) calibration, and (5) skill evaluation. The approach is illustrated here by reviewing the parameterization of the ISIS-Fish model of the anchovy fishery in the Bay of Biscay. Using this approach, it is possible to make a thorough assessment of missing information (e.g. accessibility to fishing and adult mortality) and to identify the strengths and weaknesses of the model under different hypotheses. Applied to the ISIS-Fish model, the results suggest higher egg and adult mortality than previously estimated, as well as new estimates for the migration towards spawning areas. They show the reliability of the model in terms of correlations with observations, and the need for further efforts to model purse seiner catches. The methodology proved to be a cost-efficient tool for objectively assessing the validity of an applied model in cases where parameter values are a mix of literature values, expert opinion and calibration.
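
To make the five-step workflow concrete, the sketch below runs the steps end to end on a deliberately simplified toy simulator. All parameter names, bounds and pseudo-observations are illustrative assumptions; this is not the ISIS-Fish model nor the paper's actual procedure, only a minimal skeleton of the sequence: one-at-a-time sensitivity screening, ranking of parameters, fixing the least influential ones, calibrating the remaining ones by least squares, and assessing skill against observations.

```python
# Minimal sketch of the five-step workflow on a toy stand-in for a complex
# fishery model. Parameters, bounds and data are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def run_model(theta):
    """Toy simulator: 12 monthly 'catches' from three parameters
    (egg mortality, adult mortality, accessibility to fishing)."""
    egg_m, adult_m, access = theta
    t = np.arange(12)
    return access * 100.0 * (1.0 - egg_m) * np.exp(-adult_m * t)

bounds = [(0.1, 0.9), (0.05, 0.6), (0.2, 1.0)]                  # plausible ranges
observed = run_model([0.4, 0.25, 0.7]) + rng.normal(0.0, 3.0, 12)  # pseudo-data

def misfit(theta):
    """Least-squares distance between simulated and observed catches."""
    return np.sum((run_model(theta) - observed) ** 2)

# Step 1: one-at-a-time sensitivity screening around a baseline point.
base = np.array([np.mean(b) for b in bounds])
effects = []
for i, (lo, hi) in enumerate(bounds):
    perturbed = base.copy()
    perturbed[i] += 0.1 * (hi - lo)
    effects.append(abs(misfit(perturbed) - misfit(base)))

# Step 2: rank parameters; only the most influential ones are calibrated,
# the others stay fixed at their baseline (literature-like) values.
order = np.argsort(effects)[::-1]
free = order[:2]

# Steps 3-4: for each alternative formulation (here a single toy one),
# calibrate the free parameters by minimizing the misfit.
def misfit_free(x):
    theta = base.copy()
    theta[free] = x
    return misfit(theta)

fit = minimize(misfit_free, base[free],
               bounds=[bounds[i] for i in free], method="L-BFGS-B")
calibrated = base.copy()
calibrated[free] = fit.x

# Step 5: skill assessment against the observations (correlation and RMSE).
pred = run_model(calibrated)
r, _ = pearsonr(pred, observed)
rmse = np.sqrt(np.mean((pred - observed) ** 2))
print(f"calibrated: {np.round(calibrated, 3)}  r = {r:.2f}  RMSE = {rmse:.1f}")
```

In a real application each step would of course be heavier (e.g. a global sensitivity design rather than one-at-a-time perturbations, several alternative model formulations, and multiple skill metrics per fleet), but the control flow is the same.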
