Spillane et al. (2007) outline in their JAN paper the methods they used to monitor (and presumably promote) intervention fidelity within a trial of a ‘complex’ intervention to encourage medication adherence and lifestyle change among heart disease patients. The study involved 48 general practices. The authors misquote our own work on complex, multi-site cluster-randomized trials. Given that the point is germane to their argument, we would like to correct this.

We are cited as saying that trials of complex interventions ‘must strive to consistently implement the same intervention by standardizing its content and delivery’. What we actually said was that something has to be standardized in a complex intervention, for the sake of both internal and external validity (Hawe et al. 2004), but it does not have to be the content of the intervention, nor the way it is delivered. In our paper, we lamented the tendency for investigators to feel that the form of an intervention had to remain the same in every site, in the mistaken belief that this was an essential requirement of a randomized trial. We felt that this over-controlled the intervention. In some cases, standardizing interventions by form (e.g. using exactly the same patient education materials in every site) might be going too far in the name of ‘treatment fidelity’, when tailoring to context might be more effective (e.g. allowing materials to be adapted to local cultural styles and literacy levels). In the latter case, the function of an intervention component would remain the same, but its form could differ among sites.

We argued that standardizing by the function a component plays in an intervention, rather than by the form it takes, may be more appropriate in many complex interventions. Indeed, the more complex an intervention becomes, the more necessary it is to have rigorous theory about the principles and processes of change being tested, while remaining flexible about the form these take in each site. Anyone versed in fields such as active listening or community development, for example, would appreciate that it is impossible to prescribe the exact form the intervention takes, but the process, principles and critical sequences ought to be recognizable and replicable.

Thus, we would suggest that both the content and the delivery of Spillane et al.’s SPHERE intervention could differ among the 24 intervention sites, yet the integrity of the intervention could be fully preserved, provided the principles of training, delivery and enactment of skills (as they have specified) coincide with the theory the intervention is testing.