P55. Impact of Look-Back Periods on Misclassification
C.D. O'Malley, R.I. Griffiths, R.J. Herbert, M.D. Danese. CfOR, Amgen, South San Francisco, CA; Outcomes Insights, Inc., Westlake Village, CA; Johns Hopkins Bloomberg School of Public Health, Baltimore, MD

Purpose: Estimating disease incidence using claims data often requires constructing a look-back period to exclude those with prevalent disease. Those missed during the look-back period may be misclassified as incident cases (false positives). Using Medicare claims, we examined the impact of varying the look-back period length on the incidence of 12 conditions.

Methods: Two cohorts of women were included: 33,731 diagnosed with breast cancer and 101,649 without cancer, all with 39 months of Medicare eligibility. Cancer patients were followed from 36 months before diagnosis (prevalence period) up to 3 months after diagnosis (incidence period). Non-cancer patients were followed for up to 39 months after the beginning of Medicare eligibility, with a sham date inserted after 36 months to separate the prevalence and incidence periods. Using 36 months as the gold standard, the look-back period was shortened in 6-month increments to examine the impact on false positives during the incidence period.

Results: In the cancer cohort, more than 20% of the total incident cases for 11 of 12 conditions were false positives using a 6-month look-back period. Lengthening the look-back period from 6 to 12 months resulted in the greatest decline in false positives. The impact varied by condition: false positive rates for diabetes and liver disease were 57% and 22% using 6 months, and 27% and 15% using 12 months, compared with 0% for the 36-month gold standard. Misclassification patterns were similar but lower in non-cancer patients.

Conclusions: Lengthening the look-back period to rule out pre-existing disease can substantially reduce the misclassification of incident disease.

P56-S. Measurement Error in Survey Data on Income From a Publicly Funded Financial Credit
F. Pega, K. Carter, T. Blakely. Department of Public Health, University of Otago, Wellington, New Zealand

Purpose: Measurement errors in survey data reporting of income from publicly funded financial credits are a central methodological concern in research on the health impact of such credits. This study sought to quantify the measurement error in reported income from the Family Tax Credit (FTC), New Zealand's equivalent of the U.S. Earned Income Tax Credit.

Methods: Seven waves of data (2002-2009) from Statistics New Zealand's Survey of Family, Income and Employment were extracted (N = 27,795). These data were restricted to a balanced panel of working-age adults in families with children aged under 13 years (N = 5,710). FTC receipt (any vs. no) and the amount received were derived from survey data and, in a second step, estimated by applying eligibility and entitlement criteria. The reported FTC receipt and amount were compared with the estimated FTC receipt and amount using cross-sectional tabular and correlation analyses.

Results: The reported FTC receipt did not correspond with the estimated receipt for 12.1-12.4% of participants. Underreporting of FTC receipt was 10.4-10.8%, and overreporting was 34.5%. The reported and estimated FTC amounts correlated weakly (Pearson's r = 0.17233).

Conclusion: This study identified large measurement errors in survey data reporting of income from FTC, raising concerns about the validity of these data.
Survey data on income from publicly funded financial credits should be scrutinized for measurement errors and may not be fit for use in epidemiological studies.
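To make the comparison described in P56-S concrete, the following Python sketch shows how survey-reported and rule-based estimated credit receipt could be cross-tabulated, and how the reported and estimated amounts could be correlated. This is not the authors' code: the data are simulated, and the column names (reported_receipt, estimated_receipt, reported_amount, estimated_amount) are illustrative assumptions rather than variables from the Survey of Family, Income and Employment.

    # Illustrative sketch only: simulated data standing in for one survey wave.
    import numpy as np
    import pandas as pd
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n = 5710  # balanced-panel size quoted in the abstract

    # Rule-based estimate of receipt (hypothetical column names).
    df = pd.DataFrame({"estimated_receipt": rng.integers(0, 2, n).astype(bool)})

    # Reported receipt agrees with the estimate most of the time;
    # a random 12% of records are flipped to mimic reporting error.
    noise = rng.random(n) < 0.12
    df["reported_receipt"] = df["estimated_receipt"] ^ noise
    df["estimated_amount"] = np.where(df["estimated_receipt"], rng.gamma(2.0, 1500.0, n), 0.0)
    df["reported_amount"] = np.where(df["reported_receipt"], rng.gamma(2.0, 1500.0, n), 0.0)

    # Cross-tabulation of reported vs. estimated receipt (any vs. no).
    print(pd.crosstab(df["reported_receipt"], df["estimated_receipt"], normalize=True))

    # Mismatch, underreporting (estimated yes, reported no) and
    # overreporting (reported yes, estimated no), as shares of the sample.
    mismatch = (df["reported_receipt"] != df["estimated_receipt"]).mean()
    underreport = (df["estimated_receipt"] & ~df["reported_receipt"]).mean()
    overreport = (df["reported_receipt"] & ~df["estimated_receipt"]).mean()
    print(f"mismatch={mismatch:.3f} under={underreport:.3f} over={overreport:.3f}")

    # Correlation between reported and estimated amounts.
    r, _ = pearsonr(df["reported_amount"], df["estimated_amount"])
    print(f"Pearson r = {r:.2f}")

The printed mismatch, underreporting, and overreporting proportions correspond to the quantities reported in the Results of P56-S; with real survey data, the estimated_receipt and estimated_amount columns would instead be derived by applying the FTC eligibility and entitlement criteria to each record.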