Abstract

The standard spatial capture–recapture design for sampling animal populations uses a fixed array of detectors, each operated for the same time. However, methods are needed to deal with the unbalanced data that may result from uneven effort due to logistical constraints, partial equipment failure or the pooling of data for analysis.

We describe adjustments for varying effort for three types of data, each with a different probability distribution for the number of observations per individual per detector per sampling occasion. A linear adjustment to the expected count is appropriate for Poisson-distributed counts (e.g. faeces per searched quadrat). A linear adjustment on the hazard scale is appropriate for binary (Bernoulli-distributed) observations at either traps or binary proximity detectors (e.g. automatic cameras). Data pooled from varying numbers of binary detectors have a binomial distribution; adjustment is achieved by varying the size parameter of the binomial.

We compared a hazard-based adjustment with a more conventional covariate approach in simulations of one temporal and one spatial scenario for varying effort. The hazard-based approach was the more parsimonious and appeared more resistant to bias and confounding. We also analysed a dataset comprising DNA identifications of female grizzly bears Ursus arctos sampled asynchronously with hair snares in British Columbia in 2007. Adjustment for variation in sampling interval had a negligible effect on density estimates, but unmasked an apparent decline in detection probability over the season. Duration-dependent decay in sample quality is an alternative explanation for this decline that could be included in future models.

Allowing for known variation in effort ensures that estimates of detection probability relate to a consistent unit of effort and improves the fit of detection models. Failure to account for varying effort may confound effort with variation in density over time or space. Adjustment for effort allows rigorous analysis of unbalanced data at little extra cost in precision or processing time, and we suggest it should become routine in capture–recapture analyses. The methods are available in the R package secr.
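As a minimal illustration of how such adjustments can be applied in practice, the following R sketch uses the secr package mentioned above. The detector layout, effort values and parameter settings are hypothetical, not taken from the study. The key point is that known effort is supplied as a detectors-by-occasions 'usage' matrix; for binary detectors, effort T then enters on the hazard scale, so that per-occasion detection probability takes the form p = 1 - exp(-T * h) for per-occasion hazard h, rather than a linear multiple of p.

library(secr)

## Hypothetical layout: a 6 x 6 grid of binary proximity detectors
grid <- make.grid(nx = 6, ny = 6, spacing = 50, detector = "proximity")
nocc <- 5

## Known but uneven effort (usage): one row per detector, one column
## per occasion. Here detectors 19-36 are assumed not operated on
## occasions 1-2, e.g. because of late deployment or equipment failure.
eff <- matrix(1, nrow = nrow(grid), ncol = nocc)
eff[19:36, 1:2] <- 0
usage(grid) <- eff

## Simulate a capture history honouring the usage matrix, then fit.
## With usage set, detection at binary detectors is scaled on the
## hazard scale, as described in the abstract.
ch  <- sim.capthist(grid, popn = list(D = 5, buffer = 100),
                    detectpar = list(g0 = 0.2, sigma = 25),
                    noccasions = nocc)
fit <- secr.fit(ch, buffer = 100, trace = FALSE)
predict(fit)  ## density and detection estimates per unit effort

Usage values need not be binary: fractional or multi-occasion effort (e.g. trap-nights) can be entered directly, in which case the same hazard-scale adjustment applies continuously.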

