Abstract

The a4a initiative aims to provide timely and cost-effective advice for the circa 250 fish stocks that, through the EU Data Collection Framework, will have at least 10 years of data by the year 2020. Current processes for assessing the state of fish stocks and managing them are resource-intensive: each stock requires the attention of one or more stock assessment scientists to produce preliminary catch advice, which is subsequently reviewed by one or two committees before the final catch advice is published. Ingrained in the development of these processes has been the development of ever more complex stock assessment models, which typically require highly skilled personnel to set up and run. The a4a initiative seeks to overcome these issues by developing a flexible, robust and easy to use stock assessment model, thus making stock assessment accessible to a wide range of scientists who do not have the highly quantitative background required to run very complex models. Forthcoming research will describe how to overcome the burden of producing catch advice for such a large number of stocks. This technical report presents a new stock assessment model along with a set of validatory tests developed under the a4a initiative.

Introduction

The a4a initiative is a visionary research initiative aiming to provide methods to use the increasing amounts of data on fish stocks being collected under the Data Collection Framework (DCF). The implementation of the 2009 revision of the DCF (2008/949/EC) generated the obligation to collect a large amount of information for all stocks subject to fisheries exploitation. Based on the regulation there are 250+ stocks for which some kind of biological information must be collected. Most of these stocks will, in the future (around 2020), have time series of exploitation data more than 10 years long, although the biological information will most likely be limited due to the high human resources required to process all the samples collected. These stocks (will) have a moderate amount of information and will not fit the data-poor stock definition. In addition, due to the large number of these stocks, it is not logistically feasible to run complex, data-hungry models that require a high level of expertise on all of them. What is required is a robust methodology that allows the assessment of a large number of stocks by stock assessment experts with distinct backgrounds.

Estimating the demographics and exploitation rates of fish stocks is the basis of current management advice across most of the world. Stock assessment has been developing over the last 50 to 100 years, and during this period many models have evolved; there are around 40 stock assessment methods in use today (SCISAM report). Some examples of age-based methods in current use are XSA (Shepherd, 1992), ASM (NOAA Fisheries Toolbox), TSA (Fryer, 1999; Gudmundsson, 1990) and SAM (Nielsen, 2008), all of which are based on similar underlying assumptions regarding stock dynamics. The main differences between these methods lie in how they treat the data, how the fishery dynamics are modelled and, of course, the user interface. This report presents a framework which allows the construction of models that mimic many of the currently available age-based stock assessment methods in a statistical setting, while also making available current statistical modelling techniques such as additive models (Wood, 2006) and structured random effects (Rue and Held, 2005).
The model is a simple statistical catch-at-age model in which the population dynamics are simply that the number of fish in a cohort declines from year to year due to a combination of natural mortality and fishing mortality. We in effect observe the population through the catches removed by the fishery, and more directly through a survey conducted at some point in the year. Where the complexity and diversity in stock assessment models usually arise is in how fishing mortality is modelled. Because it is not possible to estimate everything (the model parameters would be unidentifiable), it is necessary to constrain it, and this can be done in many ways. Here we propose the use of splines and random effects to provide a robust and efficient way to constrain the model, packaged in a robust and user-friendly statistical framework. The report begins with a brief technical description of the model, with two examples demonstrating its application to the North Sea plaice data set. Extensions are briefly discussed. The second half of the report presents extensive model testing and validation on simulated data sets. The simulation procedure is described and the data sets presented. Finally, the model is fitted to each data set and a selection of fits is shown.

Model Description

The basis of the model is

N_{a+1,t+1} = N_{a,t} \, e^{-(F_{a,t} + M_{a,t})},

predicted catches are

\hat{C}_{a,t} = \frac{F_{a,t}}{F_{a,t} + M_{a,t}} \left( 1 - e^{-(F_{a,t} + M_{a,t})} \right) N_{a,t},

and predicted survey indices are

\hat{I}_{a,t} = Q_a \, N_{a,t} \, e^{-\delta (F_{a,t} + M_{a,t})}.

So the predictions are completely determined by:

R_t = N_{1,t}, i.e. recruitment,
A_a = N_{a,1}, i.e. the initial age structure,
F_{a,t}, i.e. fishing mortality, and
Q_a, i.e. catchability at age,

and we observe these through

\log I_{a,t} \sim N(\log \hat{I}_{a,t}, \sigma_a),
\log C_{a,t} \sim N(\log \hat{C}_{a,t}, \tau_a).

We parameterise the model using linear models, e.g.

\log F_{a,t} \sim \mathrm{factor}(age) + \mathrm{factor}(year)

or

\log F_{a,t} \sim s(age) + \mathrm{factor}(year).

These are examples of separable F assumptions. The function s(.) is a smooth function (taken from the mgcv package in R). Short R sketches illustrating the population equations, the formula notation and the simulated observation error are given at the end of the Model Tests section.

A simple example: separable F

\log F_{a,t} \sim s(age, 4) + \mathrm{factor}(year)
\log Q_a \sim s(age, 4)

[Figure 1: A simple example: separable F]

A more complex example: changing F pattern

\log F_{a,t} \sim s(age, year, (4, 30))
\log Q_a \sim s(age, 4)

[Figure 2: A more complex example: changing F pattern]

Extensions

We can introduce covariates through the formulas: for example, technological creep in surveys, temperature data in the recruitment model, or spikes in recruitment modelled in terms of environmental covariates. We can also allow changes in survey selectivity over time by using 2D smooths, useful perhaps for the North Sea plaice surveys. However, the model has the potential to be very complex: random effects (with fixed variance) can be placed around log F, log Q and log R, and all the complaints people have with GAMs apply here too. But we can package the model to reduce the options available:

• stable fishery (separable F)
• changing fishery (the F pattern can evolve)
• imposed exponential survey selectivity
• imposed flat-topped selection

Model Tests

Here follows the simulation testing design:

1. Tests were run on the WKLIFE simulated stocks.
2. Stocks in 5 different exploitation statuses were used:
   • developing
   • developing and stabilizing
   • stable at high exploitation
   • recovery
   • full developing-stable-recovery
3. Data series are 15 years long, except the "full" case, which is 50 years.
4. The survey index has decreasing catchability (bottom trawl type) with a 10% CV. Note that the survey index becomes very informative, with abundance at all ages correlated.
5. Catch-at-age has a 10% CV observation error.
Inputs

[Figure 3: Developing fishery. Stocks based on the WKLIFE life history parameters list. Panels show F, Recruits, SSB and Yield by year for each stock.]

Developing and stabilizing
