Abstract

Good nutrition is fundamental to health—there is no debate there. To improve the health outcomes of populations, we know that behaviour change is necessary, especially in those at risk of developing chronic illness. The burden of non-communicable disease continues to weigh heavily on those who can least afford it; recent prevalence figures show diabetes quadrupling globally1 and age-standardised obesity affecting 11% of men and 15% of women worldwide.2 Dietitians are identified as the health professionals responsible for delivering effective, evidence-based nutrition care within public health and health-care settings.3 The key component within that care is evidence-based decision making, which is underpinned by the synthesis of the best available scientific research into guidelines for practice. However, it is at this stage that we start to see cracks appearing in the process, especially when we look at the evidence relating to changing the behaviours of health professionals.4, 5 Just like following a recipe, when we know more about the quality of the ingredients, whether ingredients can be substituted, the setting the recipe is designed for and whether it needs adapting for individual factors, the outcomes will be better.

The field of translational research, or implementation science, arose out of this need for greater clarity around the contextual and functional components influencing broader practice, and it aims to yield the end product we seek: better health in the individuals and populations we work with. Implementation science is a relative newcomer on the science scene and is still emerging from its infancy. Recognition of ‘what it is all about’ varies, compounded by the confusing array of names used in the field—for example, knowledge translation, research translation, translational research, implementation science, research utilisation and dissemination science. Implementation science has been defined as the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice to improve the quality and effectiveness of health services and care.6 In its early days, the field was labelled ‘an expensive version of trial-and-error’7 because of its lack of theoretical basis. When theoretical underpinnings are lacking, it becomes difficult to identify the factors that predict the chance of successful implementation and to create improved strategies for better outcomes. The past decade has seen exponential growth in implementation science theory as health systems and decision makers seek to implement change in a scientific and rigorous manner; such change requires scaffolding, and theory can provide it. The theoretical approaches used in implementation science have three main functions: providing process information, explaining what influences outcomes and evaluating implementation outcomes.8 Figure 1 identifies these functions and the categories, including the types of frameworks, models and theories, sitting under them.
It is estimated to take close to 24 years to translate a laboratory discovery into routine medical practice.9 We also know that the effects seen within tightly controlled intervention studies are rarely reproduced in the real world.10, 11 Implementation science seeks to build a bridge over the gap between evidence and practice—the so-called ‘valley of death’.12 Systematic reviews are seen as the cornerstone of knowledge translation because they enable the cumulative effect of individual studies to be explored and active elements to be identified. The growth in this area is such that prospective registration of systematic review protocols—through PROSPERO—is now required.13 This step was taken both to improve systematic review standards and to reduce the duplication of effort that detracts from the usefulness of reviews. The PRISMA standard14 for reporting systematic reviews and meta-analyses now encompasses aspects of quality, quantity and ranking of individual research studies, which can improve guideline production and better inform practice.

Challenges exist in using systematic reviews for nutrition research; the methods for gathering and evaluating evidence come from the medical model, where randomised controlled trials (RCTs) are the top level of evidence, and this does not necessarily fit the types of evidence needed to change nutrition practice.15 For example, it is rarely possible to blind participants to whole foods or diets; it is unethical to remove or restrict essential nutrients or core foods from diets; and very few RCTs run long enough to assess long-term health and dietary outcomes. In addition, RCTs demand a high level of internal validity—normally at the expense of external validity, which is what implementation science is primarily concerned with.16-19

In examining the top 50 most highly cited nutrition publications and how trends have changed over time, Lo et al. shine a light on how our field is moving towards a greater emphasis on evidence for use in practice.20 The shift towards acceptance and uptake of systematic reviews is reassuring, but there was, disappointingly, a lack of guidelines or directly practice-relevant publications within the list—only three out of 50.20 This finding can be read in one of two ways: either we are still not very good at incorporating evidence into practice, or the evidence within the majority of our nutrition publications is not sufficiently practice-informed to be useful and therefore cited. Researchers involved in knowledge translation would argue the former, and state that if we want more evidence-based practice, we need more practice-based evidence.21 The systematic review by Lawlis, Knox and Jamieson on the policy, perceptions and use of school canteens in Australia emphasises the importance of the evidence base and the need for higher quality research to bring about informed decision making.22 Their search identified 2741 studies, yet only 12 met the inclusion criteria; they noted the paucity of qualitative work, which ultimately limited the potential reach of the review process. The review points towards low levels of compliance with healthy canteen policies, child preference for non-core food choices and a lack of guideline adherence within the Australian setting, all of which are useful when trying to understand the extent of the gap between evidence and practice and the type of ‘bridge’ practitioners need to construct to span that divide.
However, we are still left needing more practice-based research if we are to bridge the ever-widening gap between evidence and improved care. Guidelines are an essential tool for scaling up a specific practice into routine care across a population. They are designed to inform practitioners on how to implement best practice, but implicit in this is the need for the guideline to be usable within everyday routine care. The evolution of guideline production methods using more transparent and user-centred approaches like GRADE23 and NICE24 has improved the quality of guidelines, but we still see issues around their interpretation in practice. Common problems are disconnects between a guideline recommendation and what is feasible to deliver,25 a lack of clarity around how practitioners should interpret guidelines when comorbidity exists26 and selecting an optimal guideline when more than one exists for the same condition.27

Palmer et al. provide an example of a guideline implementation issue in their examination of refeeding guidelines within a cohort of Australian and New Zealand dietitians.28 Two hundred and ninety-nine eligible dietitians evaluated 13 case studies for their level of refeeding syndrome risk within an online survey. Reassuringly, the levels of refeeding syndrome risk identified were generally consistent with the NICE guidelines, and over half of the respondents had read refeeding syndrome guidelines within the two months prior to taking the survey. However, the participants highlighted the need for more evidence to assist in applying the guideline in busy clinical practice—where all the necessary information may not be available at the moment a practitioner needs to make a decision. This underscores the importance of the implementation component within guidelines, rather than just the basic clinical issue being targeted.

If we are to take guidelines forward into our everyday practice, we need to look seriously at how we address barriers to professional and organisational behaviour change—not simply the client's. This more holistic change management process requires us to broaden our perspective on what needs to change and how each change will affect the ecosystem it sits within. It is here that applying implementation theory can help move change forward. Process models, like the Knowledge to Action (K2A) Framework29 and the Quality Implementation Framework,30 can be used to guide the steps taken. Frameworks like Promoting Action on Research Implementation in Health Services (PARIHS),31 the Consolidated Framework for Implementation Research (CFIR)32 and the Theoretical Domains Framework,33 alongside theories like Diffusion of Innovation34 and Normalization Process Theory,35 can help identify the determinants that influence implementation outcomes. Implementation evaluation frameworks such as RE-AIM36 and PIPE37 can be used to specify the aspects of implementation suited for evaluation. Following these tools will not guarantee success, but they do take us closer to unpacking the black box of ‘what works’.
Wilkinson et al. applied an implementation science approach to take the delivery of a guideline-led gestational diabetes nutrition model of care into practice at a single Queensland maternal health service provider.38 The implementation plan used a sound theoretical basis—the Theoretical Domains Framework33 and the Behaviour Change Wheel39—to identify the areas for behaviour change across the setting and the most effective means of targeting them. The intervention included staff training sessions, clinical pathway development, staffing of the new care schedule, development of client information sheets, identification of local champions, implementation of an audit–feedback process and securing clinic space to deliver the scheduled appointments for the improved model of care. In taking this approach, they were able to see some clinically relevant changes in medication requirements, particularly in women who received best practice care, although these results need to be interpreted cautiously because the study was underpowered. The study also draws attention to the impact of service and organisational constraints on effective implementation. Securing a room within the clinic space to deliver the appointments was the biggest barrier to implementation in the study, yet from an evidence-based practice or guideline-led care perspective, this physical environment factor would not even have featured. The planned dissemination and study of this model of care within more Queensland Health services38 will assist in refining those implementation elements and provide a much richer answer on how knowledge translation can occur more effectively.

The care of individuals and populations is a complex endeavour; it requires skill and situational judgement built from experience, in addition to evidence on ‘what works’. This, in essence, is what implementation science is. By thinking about and answering the implementation question ‘who needs to do what differently, where and to whom?’, we are likely to have adapted the evidence and guidelines to the practice context. This framing of research activity is what we need more of—research where application is front and centre of the whole process. We cannot continue to rely on the medical model of research, where the RCT is the pinnacle of enquiry, because it is unable to answer the challenges that implementation presents. A modified research toolkit is needed, in which dietitians are comfortable working with methods such as quasi-experimental designs, like interrupted time-series and waitlist crossovers; randomised designs, like stepped wedge and randomised encouragement trials; mixed methods; health economics; and qualitative approaches.40 This skill set will take time to develop; it requires workforce training and modification of the university research training curriculum.
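To make one of those quasi-experimental designs concrete, below is a minimal sketch of a segmented regression analysis for an interrupted time-series study, written in Python with pandas and statsmodels. The scenario, variable names and simulated audit data are hypothetical illustrations, not drawn from any study cited here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_months = 36
change_point = 18  # month the new model of care was introduced (hypothetical)

df = pd.DataFrame({"time": np.arange(n_months)})
df["post"] = (df["time"] >= change_point).astype(int)  # 1 after the change
df["time_after"] = np.where(df["post"] == 1, df["time"] - change_point, 0)

# Simulated monthly audit outcome (e.g., % of clients receiving
# guideline-led care): baseline level and trend, plus a step change
# and a slope change at the interruption, plus noise.
df["adherence"] = (
    50
    + 0.2 * df["time"]          # pre-existing trend
    + 8.0 * df["post"]          # immediate level change
    + 0.5 * df["time_after"]    # change in trend after the interruption
    + rng.normal(0, 2, n_months)
)

# Segmented regression: the coefficients on `post` and `time_after`
# estimate the level and slope changes at the interruption.
model = smf.ols("adherence ~ time + post + time_after", data=df).fit()
print(model.summary())

The appeal of this design for practice settings is that it needs no control group: routinely collected audit data before and after a practice change carry the comparison, with the pre-change trend serving as the counterfactual.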
Implementation science demands collaboration—not just with other health-care professionals, but with a variety of stakeholders from policy, political, economic, education, information technology, commerce or service delivery backgrounds.41 This level of collaborative work requires us to be more connected, so becoming involved in professional networks, systems-based scientific networks and community-based participatory research is important for practitioners.42, 43

Finally, we need to become more discerning users of evidence and insist on better standards of intervention trial reporting so that research can be translated. If we lack a complete description of an intervention, it becomes impossible to replicate or build on its findings. We know that the reporting of interventions is surprisingly poor across the board,44 and we need to change this. Using the Template for Intervention Description and Replication (TIDieR) checklist would be one way to improve these standards of reporting.45 Another aspect here is implementation fidelity: the degree to which an intervention was delivered as intended.46 As busy practitioners, we know that consistently completing every step in a process is challenging, yet when we use the evidence from interventions, we rarely check for fidelity data.47 Intervention fidelity assessment acts in two positive ways: it provides a feedback mechanism to improve practitioner performance, and ultimately intervention outcomes, and it increases transparency around the processes needed to maintain and sustain an intervention.47 Surely, this is a win-win situation. Implementation science can help us improve the impact of our work, and we need more of it if we are to deliver better dietetic care in the future.
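As a concrete illustration of a fidelity check, the sketch below scores a handful of consultation sessions against a simple delivery checklist; the component names and data are invented for illustration and do not come from any study cited here.

import pandas as pd

# One row per consultation session; 1 = component delivered as
# intended, 0 = omitted or modified (hypothetical checklist data).
sessions = pd.DataFrame({
    "goal_setting":     [1, 1, 0, 1, 1],
    "diet_history":     [1, 1, 1, 1, 1],
    "education_sheet":  [1, 0, 0, 1, 0],
    "follow_up_booked": [1, 1, 1, 0, 1],
})

# Component-level fidelity flags the steps that are consistently
# dropped, giving practitioners targeted audit-feedback.
component_fidelity = sessions.mean().sort_values()
print(component_fidelity)

# Overall fidelity: proportion of all prescribed components delivered,
# a transparency figure to report alongside intervention outcomes.
overall_fidelity = sessions.to_numpy().mean()
print(f"Overall fidelity: {overall_fidelity:.0%}")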
The author did not receive funding to write this manuscript. The author has no conflict of interest to declare. Sharleen L. O'Reilly was the sole author of this manuscript.