Abstract

Unlike other cardiovascular risk factors, current standards for cardiorespiratory fitness (CRF) depend on age. Whether age-specific or age-independent CRF standards are better indicators of health outcomes is unclear.

PURPOSE: To compare the predictive accuracy of age-specific and age-independent CRF standards for all-cause mortality.

METHODS: We studied 23,530 men aged 40–84 y (mean ± SD, 49 ± 7 y) who performed a symptom-limited maximal treadmill exercise test using a modified Balke protocol as part of an initial preventive medical examination at the Cooper Clinic in Dallas, Texas, from 1970 to 1996. Mean CRF was 11.2 ± 2.1 METs. Men were classified in two ways: (1) by quintiles of maximal treadmill time within their own age group (40–49, 50–59, or 60–84 y); and (2) by quintiles of maximal treadmill time for men aged 20–39 y, independent of age. In each classification, men in the least-fit category were regarded as low fit. Mortality surveillance through 1996, completed primarily through the National Death Index, identified 1,303 decedents in 282,121 person-years of follow-up (mean, 12.0 ± 6.7 y). Cox proportional-hazards regression was used to estimate the comparative effect of each fitness classification on mortality rates, after adjustment for age, health status, smoking habit, and examination year.

RESULTS: About 58% of men and 56% of decedents were classified into a less fit category by the age 20–39 standard than by their own age-group standard. The mortality hazard ratio (HR) for low fitness was 1.8 by the age-specific standard and 1.7 by the age-independent standard (P < 0.001 for each). The age-specific standard explained more of the variation in adjusted mortality rates than the age-independent standard (likelihood ratio χ² = 108.6 vs. 67.4 on 4 df). The SEs of the HR estimates were generally smaller for the age-specific than for the age-independent standard (median SE = 0.045 vs. 0.068). The linear trend in HR across levels of either standard was the same (HR = 0.8 per level, P < 0.001). Model deviance was modestly lower (AIC = 16,476 vs. 16,518) and predictive discrimination slightly better (c-index = 0.74 vs. 0.73), but hazard calibration was worse (goodness-of-fit χ² = 10.2 vs. 2.0 on 8 df), for the age-specific than for the age-independent standard. Similar results were observed in each age group (40–49, 50–59, and 60–84 y).

CONCLUSION: Age-specific standards for CRF appear to be marginally better predictors of mortality than age-independent standards.

Supported by NIA AG06945.
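The model comparison summarized in RESULTS can be illustrated with a pair of Cox proportional-hazards fits, one per fitness classification, evaluated on the same adjusters and compared by likelihood-ratio χ² (4 df), AIC, and c-index. The sketch below is a minimal illustration in Python using the lifelines package on simulated data; the column names, the simulated cohort, and the model specification are assumptions for illustration only, not the authors' code or the Cooper Clinic data.

```python
# Minimal sketch (not the authors' analysis): compare Cox models that encode
# fitness by age-specific vs. age-independent quintiles, each adjusted for
# age, smoking, and examination year. Data and column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy import stats

rng = np.random.default_rng(0)
n = 5000

# Hypothetical covariates: adjusters plus two alternative fitness
# classifications (quintile 1 = least fit), and follow-up time/event.
df = pd.DataFrame({
    "age": rng.integers(40, 85, n),
    "smoker": rng.integers(0, 2, n),
    "exam_year": rng.integers(1970, 1997, n),
    "fit_age_specific": rng.integers(1, 6, n),  # quintile within own age group
    "fit_age_indep": rng.integers(1, 6, n),     # quintile vs. men aged 20-39 y
})
# Simulate survival times loosely related to age and fitness (illustration only).
hazard = 0.01 * np.exp(0.03 * (df["age"] - 50) - 0.2 * (df["fit_age_specific"] - 3))
time = rng.exponential(1.0 / hazard)
df["time"] = np.minimum(time, 25.0)            # administrative censoring at 25 y
df["death"] = (time <= 25.0).astype(int)

def fit_cox(fitness_col):
    """Fit a Cox model with the chosen fitness classification plus adjusters."""
    cols = ["time", "death", "age", "smoker", "exam_year"]
    data = pd.get_dummies(df[cols + [fitness_col]], columns=[fitness_col],
                          drop_first=True, dtype=float)  # quintile 1 as reference
    return CoxPHFitter().fit(data, duration_col="time", event_col="death")

# Adjusters-only model, so each fitness classification is tested on 4 df.
null_model = CoxPHFitter().fit(df[["time", "death", "age", "smoker", "exam_year"]],
                               duration_col="time", event_col="death")

for col in ["fit_age_specific", "fit_age_indep"]:
    m = fit_cox(col)
    lr_chi2 = 2 * (m.log_likelihood_ - null_model.log_likelihood_)
    print(col,
          "LR chi2 =", round(lr_chi2, 1),
          "p =", round(stats.chi2.sf(lr_chi2, df=4), 4),
          "AIC =", round(m.AIC_partial_, 1),
          "c-index =", round(m.concordance_index_, 3))
```

Using the least-fit quintile as the reference category keeps the fitted hazard ratios for the higher quintiles directly comparable to the low-fitness HRs quoted in the abstract.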
