Although this special issue on treatment fidelity is extremely timely for the field of early childhood special education (ECSE), it comes somewhat late in the history of conversations about the importance of information about treatment implementation. In 1980, Billingsley, White, and Munson discussed the need for treatment fidelity reliability in single-case design studies. Blase, Fixsen, and Phillips (1984) pointed out the need to discriminate implementation outcomes from effectiveness outcomes in single-case and group design studies. In the two decades that followed, Fixsen, Blase, and others argued that only when effective practices and programs are fully implemented should we expect positive outcomes (Bernfeld, 2001; Fixsen & Blase, 1993; Institute of Medicine, Committee on Quality of Health Care in America, 2001; Washington State Institute for Public Policy, 2002; cited in Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). Discussion of treatment fidelity has become more frequent in the last decade, fueled by changing standards for methodological rigor in educational research and public concern for implementing evidence-based practices in school and community settings (e.g., Borrelli, 2011; Elliott & Mihalic, 2004; Kratochwill & Stoiber, 2002; Stockard, 2010).

The current issue provides a representative sample of where the field of ECSE stands in the evolving consideration of implementation fidelity. These six articles link emergent conceptual frameworks in treatment implementation fidelity to the training and coaching process used for professional development (PD) in ECSE and to measuring fidelity of interventions in literacy, behavior disorders, social-emotional support, and family interventions. Two articles report specifically on progress toward reporting implementation fidelity in the field of ECSE.

Dunst, Trivette, and Raab (p. 85) introduce an implementation science framework and provide specifications for measuring fidelity in ECSE research. In introducing this framework, they propose the use of the term implementation to refer to "a specific set of [professional development] activities designed to put into practice an [intervention] activity." They argue that the use of any type of early childhood intervention practice requires attention not only to the fidelity of the intervention practice but also to the fidelity of the methods used to promote the use of that practice. They also point out the need to understand the active ingredients of implementation practices and interventions. Essentially, measurement of fidelity should index the active ingredients of implementation and intervention; further, the determination of active ingredients should be linked to research-based evidence of effectiveness. Dunst et al. provide a conceptual model and analytical framework for linking variations in implementation fidelity, variations in the intervention as delivered, and practice outcomes for children and families. They illustrate the application of this model using data from a Head Start research project in which teachers were taught to implement early literacy practices.

Powell and Diamond's (p. 102) argument for the application of implementation science principles in examining PD in experimental studies of language and literacy instruction for preschoolers follows the logic offered by Dunst et al. Powell and Diamond propose that a critical link in the PD pathway, the individualized coaching provided to teachers as a means of training them to use a new intervention procedure, is largely a black box. They use their conceptual framework for coaching (Powell & Diamond, 2013), which emphasizes structure, content, and process, to describe the impact of coaching on teachers' implementation of evidence-based literacy practices. The procedures for coaching were derived from a series of iterative experimental PD studies. In this issue, they report data from a hybrid coaching-based PD program offered to Head Start teachers. …