Abstract

Background
It is common to use the mid-point between the latest-negative and earliest-positive test dates as the date of the infection event. However, the accuracy of the mid-point method has yet to be systematically quantified for incidence studies once participants start to miss their scheduled test dates.

Methods
We used a simulation-based approach to generate an infectious disease epidemic for an incidence cohort with a high (80–100%), moderate (60–79.9%), low (40–59.9%) or poor (30–39.9%) testing rate. Next, we imputed a mid-point and a random-point value between each participant's latest-negative and earliest-positive test dates. We then compared the incidence rates derived from these imputed values with the true incidence rate generated by the simulation model.

Results
The mid-point incidence rate estimates declined erroneously towards the end of the observation period once the testing rate dropped below 80%. This decline was in error by approximately 9%, 27% and 41% for the moderate, low and poor testing rates, respectively. The random-point method did not introduce any systematic bias in the incidence rate estimate, even for testing rates as low as 30%.

Conclusions
The mid-point assumption for the infection date is unjustified and should not be used to calculate the incidence rate once participants start to miss their scheduled test dates. Under these conditions, we show an artefactual decline in the incidence rate towards the end of the observation period. The single random-point method, by contrast, is straightforward to implement and produces estimates very close to the true incidence rate.
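
The two imputation strategies compared in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; the function names, cohort structure and toy data are assumptions introduced purely to show how a mid-point versus a single random-point infection date feeds into an events-per-person-time incidence rate.

```python
# Minimal sketch (not the authors' code): impute an infection day between the
# latest-negative and earliest-positive test days, then estimate an incidence
# rate as events per person-year. Variable names and toy data are illustrative.
import random

def impute_infection_day(last_negative_day, first_positive_day, method="random"):
    """Return an imputed infection day within the censoring interval."""
    if method == "midpoint":
        return (last_negative_day + first_positive_day) / 2.0
    # Single random point drawn uniformly within the interval.
    return random.uniform(last_negative_day, first_positive_day)

def incidence_rate(cohort, method="random"):
    """Events per person-year, using imputed infection days for seroconverters.

    `cohort` is a list of dicts with entry_day, last_negative_day and, for
    seroconverters, first_positive_day (None otherwise, with exit_day giving
    the end of follow-up).
    """
    events, person_days = 0, 0.0
    for p in cohort:
        if p["first_positive_day"] is not None:
            events += 1
            end = impute_infection_day(p["last_negative_day"],
                                       p["first_positive_day"], method)
        else:
            end = p["exit_day"]
        person_days += end - p["entry_day"]
    return events / (person_days / 365.25)

# Toy example: two seroconverters with wide test gaps, one censored participant.
cohort = [
    {"entry_day": 0, "last_negative_day": 100, "first_positive_day": 400, "exit_day": None},
    {"entry_day": 0, "last_negative_day": 300, "first_positive_day": 700, "exit_day": None},
    {"entry_day": 0, "last_negative_day": 600, "first_positive_day": None, "exit_day": 730},
]
print("mid-point rate:", round(incidence_rate(cohort, "midpoint"), 3))
print("random-point rate:", round(incidence_rate(cohort, "random"), 3))
```

With frequent testing the two methods barely differ, because the censoring interval is short; the paper's point is that once test gaps widen (testing rates below 80%), the mid-point systematically misplaces late infections and the random-point draw avoids that bias.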
