In 2001, the Institute of Medicine (IOM) identified six aims for changing the health care system, calling specifically for safe, effective, patient-centered, timely, efficient, and equitable care (Committee on Quality of Health Care in America, 2001). A decade later, a follow-up report reiterated this need, stating that "we need advances in health care delivery to match the advances in medical science so the benefits of that science may reach everyone equally" (Committee on Quality of Health Care in America, 2011, para. 5). In no discipline does this call to arms resonate more clearly than within the field of pediatric psychology.

As pediatric psychologists, we understand that each patient exists within a unique context, and in our clinical interactions we strive to identify, appreciate, and address that context when delivering interventions. Yet, as eloquently put by Cohen, Feinstein, Masuda, and Vowles (2014) in this issue, the "psychology research literature is dominated by group-aggregate data, which provide the predominate evidence-base to inform our work with individual patients" (p. 124). Not only do group-aggregate data typically describe the mean-level, or average, experience of the group, but the designs that give rise to these data are often tightly constrained to minimize variation within the sample, and thus bear little resemblance to the experience of seeing an actual patient in a clinical setting. Therein lies the widely acknowledged science–practice gap.

This call to arms is particularly relevant in light of this special issue on quantitative methods in pediatric psychology, because many of the methods highlighted in this collection of papers focus on understanding and embracing individual variation within a population to move the field forward in a more unified way. Some, like Youngstrom's (2014) example of using receiver operating characteristic (ROC) curves in the development of clinical screening tools, have a clear and direct application to clinical practice. Others, like Cushing, Walters, and Hoffman's (2014) discussion of aggregated N-of-1 randomized controlled trials, showcase strategies for increasing external validity, or generalizability to a clinical population, while maintaining scientific rigor. Finally, papers such as the two-part introduction to latent variable mixture modeling offered by Berlin and colleagues preserve the traditional group-aggregate approach, but with an eye toward using naturally occurring variation among individuals to learn more about the population of interest and about how relationships change as a function of variables that might previously have been considered extraneous, or even statistical noise to be avoided (Berlin, Parra, & Williams, 2014; Berlin, Williams, & Parra, 2014). In our own work on chronic abdominal pain, we have found that attending to individual variation in research design and analysis has allowed us to identify clinically meaningful relationships and subgroups that may help direct more targeted and efficient treatment (see Schurman et al., 2008, and Schurman, Kessler, Anderson, & Gu, 2013, for relevant examples).

The unifying thread in this collection of papers is the recognition that individual variation is not the enemy, but rather an asset to be considered in the design and analysis of our scientific work. The timing of this message is right in a variety of ways.
As suggested by Karazsia, Berlin, Armstrong, Janicke, and Darling (2014), the maturation of our field demands that we look beyond the simple question of "What works?" to "How, why, when, and for whom does it work?" Mobile, wireless, and even automated data collection methods allow for rich data capture with minimal participant burden to support more complex, person-centered analyses. Technological advancements have placed many