Two years ago, Freedman, Aykan, and Martin (2001) reported that the percentage of older Americans with severe cognitive impairment had declined significantly, from 6.1% in 1993 to 3.6% in 1998. These apparent improvements were not explained by changes in demographic and socioeconomic composition or by the prevalence of stroke, vision, or hearing impairments. We concluded cautiously that older persons appeared to have better cognitive functioning at the end of the decade. In a companion piece (Freedman, Aykan, & Martin, 2002), we showed that these results were robust to a variety of assumptions about missing data, loss to follow-up, and the institutional population, and we called for replication with future waves of the Health and Retirement Study (HRS). In today's issue of the Journal, Rodgers, Ofstedal, and Herzog (2003) provide this much-needed analysis of additional HRS waves. They reach quite the opposite conclusion, that there appears to have been little improvement, and call for future studies based on other data sources. Like so many other scientific matters of import, the question of late-life cognitive trends is not easily answered, and researchers have called for additional investigation. Yet we are left to wonder: Will analyses of additional data sources bring us closer to the truth or simply add to the confusion?

These three analyses of the HRS join a small but growing set of studies of national trends in late-life cognition. Rodgers and colleagues (2003) refer to several studies that draw upon the National Long Term Care Survey, which points to declines in dementia among the chronically disabled from the early 1980s through the mid-1990s. In addition, using the 1986 and 1993 National Mortality Followback Studies, Liao, McGee, Cao, and Cooper (2000) showed significant declines in cognitive dysfunction in the last year of life for men aged 65–84 years and women aged 65 years and older. In sum, thus far two surveys suggest improvements, and a third provides inconsistent evidence that appears sensitive to analytic decisions.

As in the disability-trend literature, analysis of additional waves and data sources (such as the National Health Interview Survey, which recently added cognition measures) does not guarantee that the "truth" will be forthcoming. During the last decade, for example, inconsistencies have emerged regarding trends in limitations in activities of daily living (ADLs; Freedman, Martin, & Schoeni, 2002). The seven surveys that have provided relevant information offer various strengths and weaknesses and differ in their definitions of disability, the age groups covered, time periods, modes of data collection, rules about proxy involvement, inclusion of the institutional population, and effectiveness in minimizing loss to follow-up and nonresponse. A similar set of variations exists for the four national surveys that now include questions on cognition.

These varied survey design features are compounded by the distinct analytic decisions that authors make about nonresponse and the coding of measures. Rodgers and colleagues (2003), for example, cite several analytic differences between their study and the studies by Freedman and colleagues (2001, 2002): the treatment of sampled persons for whom proxies responded, the imputation methods for missing data, and the use of different cutoffs to identify those with severe impairment.
A fourth key difference that appears especially important in the analysis by Rodgers and colleagues, controlling for prior testing, could not be incorporated into the two-wave comparisons (Freedman et al., 2001, 2002). Without additional sensitivity analyses, it is impossible to sort out how these analytic decisions contribute to the inconsistencies.

Moreover, measuring cognitive functioning may be even more challenging than measuring limitations in ADLs or related underlying physical limitations, for three reasons. First, cognitive impairment is a complex, multidimensional concept encompassing disruptions of memory, language function, motor activity execution, object recognition, abstract thinking, information processing, spatial orientation, and judgment. Yet these concepts are not easily separated or measured, and despite several reliable clinical screens for dementia, no universally accepted survey instrument has emerged. The wide variation in measurement approaches, from proxy assessments of memory to tests of recall for those able to participate in a survey, will inevitably complicate comparisons. Second, the mere presence of cognitive impairment increases the need for a proxy. On the basis of the analysis by Freedman and colleagues (2001), for