For infections that are typically asymptomatic, targeted surveillance systems (whereby individuals at increased risk are tested more frequently) will, on average, detect infections earlier than systems with random testing or systems in which all individuals are tested at the same interval. However, estimating temporal trends in infection risk from data generated by such targeted surveillance systems can be challenging. A similar problem arises in targeted surveillance to detect faults in individual industrial components. The incidence of bovine tuberculosis (TB) in British cattle has been generally increasing over the last thirty years. Cattle herds are routinely tested for evidence of exposure to the aetiological bacterium Mycobacterium bovis, in a targeted surveillance programme in which the testing interval is determined by past local TB incidence and local veterinary discretion. The UK Department for Environment, Food and Rural Affairs (Defra) reports the monthly percentage of tests on officially TB-free (OTF) herds that result in a confirmed positive test for M. bovis (i.e. the percentage of tested herds with OTF status withdrawn); this statistic shows substantial fluctuations (three years apart) superimposed on the increasing trend. Because the number of herds tested changes over time, this cyclic pattern is difficult to interpret. Here we evaluate an alternative to the Defra method, in which each incident event is distributed across the period at risk to infer the underlying trends in infection incidence, using a stochastic model of cattle herd incidence and testing frequencies fitted to data on the monthly number of herds tested and the number of these with OTF status withdrawn in 2003–2010. We show that, for an increasing underlying incidence trend, the current Defra approach can produce artefactual fluctuations, whereas the alternative method provides more accurate descriptions of the underlying risks over time.
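The contrast between the two approaches can be illustrated with a minimal, hypothetical sketch (Python): the Defra-style statistic attributes each incident entirely to the month of the positive test, whereas the alternative spreads it over the months the herd was at risk. The toy data, variable names, and uniform spreading rule below are illustrative assumptions only, not the paper's fitted stochastic model.

```python
from collections import defaultdict

# Toy records (illustrative only): for each incident herd, the month of its
# last clear test and the month in which OTF status was withdrawn (0..11).
incidents = [(0, 11), (2, 8), (5, 11), (1, 7)]

# Toy denominator (illustrative only): number of herds tested each month.
tests_per_month = {m: 100 for m in range(12)}

# Defra-style statistic: each incident counted in the month of detection,
# expressed as a percentage of tests carried out that month.
defra_pct = defaultdict(float)
for _, detected in incidents:
    defra_pct[detected] += 1
for m in tests_per_month:
    defra_pct[m] = 100 * defra_pct[m] / tests_per_month[m]

# Alternative: distribute each incident uniformly across its at-risk period,
# i.e. the months between the last clear test and the detecting test.
spread_incidence = defaultdict(float)
for last_clear, detected in incidents:
    at_risk_months = range(last_clear + 1, detected + 1)
    weight = 1.0 / len(at_risk_months)
    for m in at_risk_months:
        spread_incidence[m] += weight

for m in range(12):
    print(m, round(defra_pct[m], 2), round(spread_incidence[m], 3))
```

Under an increasing underlying risk with testing intervals tied to past incidence, the first statistic can concentrate incidents into the months when many overdue herds happen to be tested, producing the kind of artefactual fluctuation described above, while the spread estimate tracks the period over which infection could actually have occurred.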