Evidence-Based Practice in AAC

Ralf W. Schlosser
The ASHA Leader, Feature, 1 June 2004. https://doi.org/10.1044/leader.FTR3.09122004.6

Evidence-based practice (EBP) is increasingly recognized as the preferred way to conduct practice in speech-language pathology in general (see Dollaghan, 2004) and in augmentative and alternative communication (AAC) in particular. Examples of its growing importance in AAC include a special issue of Perspectives on Augmentative and Alternative Communication devoted to EBP, recent invited seminars at the ASHA Convention, a special issue of Augmentative Communication News, and the recent publication of a monograph (Schlosser, 2003a).

EBP has its origins in the field of medicine, where it has come to be known as evidence-based medicine (EBM). Many definitions of EBM/EBP have been proposed. For example, EBM has been defined as "…the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients…[by] integrating individual clinical expertise with the best available evidence from systematic research" (Sackett et al., 1996). Although frequently cited, this definition has shortcomings. If decisions are being made about the care of individual patients, why are their perspectives not integrated with research evidence and clinical expertise? In AAC we have a long-standing awareness of the crucial role of the individual using AAC and other relevant stakeholders in making decisions and in evaluating the impact of our services and interventions. Hence, the viewpoints, preferences, concerns, and expectations of those who directly or indirectly control the viability of an assessment or intervention (e.g., individuals using AAC, family members, caregivers, and friends) need to be integrated with clinical expertise and research evidence.

Therefore, Schlosser and Raghavendra (2004) define EBP in AAC as "the integration of best and current research evidence with clinical/educational expertise and relevant stakeholder perspectives, in order to facilitate decisions about assessment and intervention that are deemed effective and efficient for a given direct stakeholder" (p. 3). In most cases, the direct stakeholder is the person who is, or will be, using AAC by virtue of being the direct recipient of any decisions arising from the EBP process.

This definition emphasizes three cornerstones that need to be integrated through the EBP process: research evidence, clinical/educational expertise, and relevant stakeholder perspectives. To my mind, EBP does not imply that evidence is declared the authority while the other cornerstones take a back seat. The reason EBP has emphasized "evidence" in its name and in much of its early writing is largely historic: it is research evidence that has traditionally been neglected in practice, and in coining the term, EBP pioneers perceived a need to highlight this aspect. Whether this emphasis continues to be appropriate is open to debate. While the intended emphasis of EBP rests on the shared integration of the three cornerstones, my colleagues and I have argued for the primacy of relevant stakeholder perspectives in moving this integration process to decision-making (see Schlosser & Prabhu, 2004a, 2004b). This is illustrated in the diagram on p. 10.
Suggestions

In her recent article in The ASHA Leader, Dollaghan (2004) dispelled some of the myths surrounding EBP. In addition to debunking some further myths (see p. 7), I offer suggestions as to what clinicians can do today to move their practice toward EBP in AAC. It is evident that clinicians will require better and more innovative resources if they are to implement such changes in their practice.

When asking questions, provide sufficient context. A question such as "Will AAC use enhance natural speech production?" is legitimate but may prove too general to be truly useful in a particular direct stakeholder's decision-making. The context has to include a description of the direct stakeholder, his or her current and future environments, other relevant stakeholders, the problem to be solved, and the anticipated outcomes. This added context allows practitioners to better evaluate the relevance of research evidence to their particular client.

When searching for evidence, look in more than one place. AAC research is scattered across several disciplines and larger fields, and a search has to reflect this characteristic in order to navigate the evidence successfully. A one-sided approach may result in crucial evidence being overlooked. Clinicians may need to consult several general-purpose databases such as the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Language and Linguistics Behavior Abstracts (LLBA), Medline, and PsycINFO.

Seek out reviews first. If relevant pre-filtered evidence such as reviews can be located, it may save time and energy because someone else has already sought out studies, appraised them, and synthesized them. Reviews may be located in database searches by combining content key words (e.g., Communication-Aids-for-Disabled) with quality filters such as Review, Meta-Analysis, or Systematic Review; the specific terminology varies with the database. If such quality filters are not successful, the search could proceed by combining content key words with appropriate free-text words such as "review." Using "review" as a free-text word identifies any article in which the term appears in its title, abstract, or body text. (A brief scripted sketch of this keyword-plus-filter strategy appears after these suggestions.)

Discriminate among reviews according to how systematic they are. Like original research studies, reviews vary widely in quality, and the more systematic the review, the better its quality tends to be. Systematic reviews specify criteria for including and excluding studies; state where and how the authors searched for studies; tell the reader how the data were extracted from the studies; and substantiate their overall conclusions with individual studies. A meta-analysis is a form of systematic review in which effect sizes are calculated for interventions across studies to provide a quantitative estimate of the intervention effect. As such, meta-analyses can be very informative for EBP and tend to be very systematic, whereas narrative reviews tend to be much less so.

When appraising evidence from original research studies, consult an appropriate hierarchy of evidence. Hierarchies of evidence are useful because they allow the clinician to identify the level of evidence in terms of the design used in the study. To do so effectively, one needs to choose a hierarchy that is suitable to AAC (Schlosser & Raghavendra, 2004); hierarchies proposed by other fields may not be appropriate. In addition, one needs to keep in mind the purpose of the decision to be made. That is, a hierarchy that aids in the selection of a treatment approach necessarily needs to differ from a hierarchy that aids an assessment question (see also Robey, 2004).

Consider factors beyond design hierarchies to adequately appraise evidence. Clinicians can readily identify the level of evidence within a hierarchy if there is adequate information in the original study. However, just because a researcher used a certain design does not mean that the design was carried out as it is supposed to be. First, clinicians may need to ask whether the design includes all the features that characterize it; when in doubt, consult a researcher in your field who may be able to clarify the issue. Second, when appraising treatment studies, it is important to assess whether the treatment has adequate integrity. In other words, did the experimenter carry out the treatment as planned? Look for interobserver agreement on the implementation of the treatment. Third, the outcome variable reported in the study has to be reliable. Look for interobserver agreement on the dependent measures. Checklists for the appraisal of studies are available in Schlosser (2003a).

Determine the relevance of the evidence for the question at hand. In addition to appraising the quality of the evidence, clinicians need to determine the degree to which the identified evidence is relevant to their particular question. Among the issues to consider: To what degree do the participants in the studies present with characteristics similar to those of my direct stakeholder? The more similarities, the more relevant the evidence. To what degree do the settings in which the studies were carried out correspond to the settings in which I seek to apply the evidence? To what extent are the treatment agents qualified and experienced in comparison to the clinicians who will carry out the intervention with my direct stakeholder? Have relevant stakeholders found the outcomes in the studies to be socially significant? Look for social validation data. If so, to what degree are these relevant stakeholders similar to the relevant stakeholders influencing the viability of the intervention with my direct stakeholder? These are only some of the factors to consider.

Document the appraised evidence. Document your appraisal through Critically Appraised Topics (CATs). CATs are short digests that summarize the search and appraisal of evidence related to a focused question (an AAC example of a CAT is available upon request from the author). CATs are updated on an as-needed basis and kept in an accessible place for the next time the same or a similar question arises.

Keep in mind that evidence does not make decisions. When integrating the evidence with your clinical expertise and relevant stakeholder perspectives through a shared process, it is crucial to remember that the ultimate decision about how to proceed rests with the direct stakeholder and other relevant stakeholders.

Collaborate and share your experiences. It is important to work with other like-minded practitioners to create a community of practice around EBP. A journal club at your place of work may be a viable vehicle for learning together. If clinicians shared their EBP experiences with other clinicians and researchers (e.g., by sharing CATs, publishing in newsletters, or presenting at conferences), the benefits for practice could be maximized and valuable directions for future practice-oriented research would emerge.
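To make the keyword-plus-filter search strategy concrete, here is a minimal sketch of how such a query might be scripted against PubMed/Medline. It is not part of the original article: the query string, the contact e-mail address, and the use of Biopython's Entrez module are illustrative assumptions, and the same combination of a content term with review-type filters can just as well be typed into a database's own search interface.

```python
# Hedged sketch: locate pre-filtered evidence (reviews, meta-analyses) by
# combining a content keyword with review-type "quality filters" in PubMed.
# Assumptions: Biopython is installed, network access is available, and the
# query string and e-mail address below are placeholders for illustration.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requests a contact address

# Content keyword (MeSH heading) combined with publication-type filters.
query = (
    '"communication aids for disabled"[MeSH Terms] AND '
    '(review[Publication Type] OR meta-analysis[Publication Type] '
    'OR systematic[sb])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "hits; first PMIDs:", record["IdList"])
```

If the publication-type filters return nothing, the free-text fallback described above amounts to replacing the filter clause with the word "review" searched against titles and abstracts; other databases (CINAHL, LLBA, PsycINFO) use their own filter vocabulary, so the terms would need to be adapted accordingly.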
Myths & Realities

Myth: EBP is impossible to implement because we do not have enough evidence.
Reality: EBP can be implemented regardless of the size of the research base. The fact is that we will never have enough evidence. The notion of the best and most current research evidence is relative rather than absolute; sometimes a case study is the best and most current evidence available.

Myth: EBP already exists.
Reality: Although some practitioners implement EBP to some extent, many more take little or no time to review current AAC research findings.

Myth: EBP declares evidence the authority.
Reality: The diagram on p. 10 illustrates that evidence needs to be integrated with clinical/educational expertise and relevant stakeholder perspectives, with the ultimate decision-making authority resting with relevant stakeholders.

Myth: EBP is a cost-cutting mechanism.
Reality: EBP focuses on the best available evidence to be integrated with clinical/educational expertise and relevant stakeholders' perspectives. This called-for integration does not dictate the least expensive decision.

Myth: EBP is cookie-cutter practice.
Reality: EBP requires not only extensive clinical expertise but also skillful integration of all three cornerstones of EBP. This integration is more likely to be novel from one direct stakeholder to the next than the same. Thus, it requires the application of principles that must be adapted to specific situations and different mixes of information. Such knowledge and skills are inconsistent with a cookie-cutter approach.

Myth: EBP is impossible to put in place.
Reality: The implementation of EBP is a matter of degree. Individual clinicians, even with less extensive effort, can accomplish some degree of EBP.

Adapted from Schlosser (2003c) with ASHA's permission.

References

Corwin, M., & Koul, R. (2003). Augmentative and alternative communication intervention for individuals with chronic severe aphasia: An evidence-based practice process illustration. Perspectives on Augmentative and Alternative Communication, 12(4), 11–15.

Dollaghan, C. (2004, April 13). Evidence-based practice: Myths and realities. The ASHA Leader, p. 12.

Law, M. (2002). Evidence-based rehabilitation: A guide to practice. Thorofare, NJ: Slack Incorporated.

Olsson, C. (2003). The EBP experiences of an AAC service provider: Diving in deep. Perspectives on Augmentative and Alternative Communication, 12(4), 15–18.

Robey, R. (2004, April 13). Levels of evidence. The ASHA Leader, p. 5.

Sackett, D. L., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (1997). Evidence-based medicine: How to practice and teach EBM. New York: Churchill Livingstone.

Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence-based medicine: What it is and what it isn't. British Medical Journal, 312(7023), 71–72.

Schlosser, R. W. (2002). EBP process illustration. Augmentative Communication News, Issue 3–4.

Schlosser, R. W. (2003a). The efficacy of augmentative and alternative communication: Toward evidence-based practice. New York, London: Academic Press.

Schlosser, R. W. (2003b). Evidence-based practice: Meeting the challenge. Perspectives on Augmentative and Alternative Communication, 12(4), 3–4.

Schlosser, R. W. (2003c). Evidence-based practice: Frequently asked questions, myths, and resources. Perspectives on Augmentative and Alternative Communication, 12(4), 4–7.

Schlosser, R. W. (in press). Hierarchies of evidence: Considerations for augmentative and alternative communication. In S. von Tetzchner & J. Clibbens (Eds.), Issues and trends in augmentative communication theory and research. London: ISAAC.

Schlosser, R. W., & Prabhu, A. (2004a). Evidence does not make decisions: An argument for the primacy of relevant stakeholders in evidence-based practice. Invited paper presented at the International Disability and Rehabilitation Conference, Johannesburg, South Africa.

Schlosser, R. W., & Prabhu, A. (2004b). Interrogating evidence-based practice through a humanistic lens. Paper to be presented at the Second International Conference on New Directions in the Humanities, June, Monash University, Prato Campus, Italy.

Schlosser, R. W., & Raghavendra, P. (2004). Evidence-based practice in augmentative and alternative communication. Augmentative and Alternative Communication, 20, 1–21.

Sigafoos, J., & Drasgow, E. (2003). Empirically validated strategies, evidence-based practice, and basic principles in communication intervention for learners with developmental disabilities. Perspectives on Augmentative and Alternative Communication, 12(4), 7–10.

Author Notes

Ralf W. Schlosser is associate professor in the Department of Speech-Language Pathology and Audiology at Northeastern University in Boston. As a former member of the Steering Committee of Special Interest Division 12, Augmentative and Alternative Communication, he contributed to the AAC Knowledge and Skills Document and the Technical Report. Contact him via e-mail at [email protected].

© 2004 American Speech-Language-Hearing Association
