Abstract

Imagine a physician who wants to research options to help her patients lose weight. As a clinical researcher, she may first explore the efficacy of a medication. Not only is there an instrument that accurately measures patient weight, but the link between the intervention (medication) and the outcome (weight) has also been established. Her study manipulates the behavior of the physician (what should be prescribed), and the intervention is administered to the patient, who, in this case, is a relatively passive recipient.

If, however, our clinical researcher wants to explore an educational intervention, she has to not only design the intervention but also establish its link to weight loss. Should the educational intervention teach patients about the calamitous effects of obesity? Will that (ie, giving patients knowledge) effect a decrease in weight? Should she target patient behaviors? What outcomes other than patient weight might be affected (eg, patient attitudes)? Because her participants are actively engaged in the intervention (and cannot be "blinded" to their participation), how should she interpret the effect? If she finds a minimal or negligible decrease in patient weight, was the intervention a failure, or were patients interpreting the intervention differently?

These are difficult questions with no easy answers. In fact, our clinical researcher may feel overwhelmed by, and ill prepared for, addressing many of these questions. Although some clinicians apply research skills obtained through clinical research experiences to education research, and although there are commonalities between clinical and education research, the subtle differences in the nature of education research require specific skills.

Regehr1 described the costs associated with using a physical science approach to study an education research problem, noting that the application of scientific research goals (including proof, problem simplification, and favoring generalizable solutions over context specificity) to medical education research may be responsible for stalled progress in the field. Such misplaced application may come from the deceptive similarity of research "language" within the 2 fields. A similar problem can occur when transitioning from clinical to education research. After all, both clinical and education researchers identify research questions, discuss "interventions," and draw on observational research designs. Yet semantics can be misleading, and there are important distinctions between the paradigms of clinical and education research. Misapplication of clinical research skills in education research can result in studies that are poorly conceived, interventions that are ineffective, and projects that waste resources.

We acknowledge that there are many clinical researchers who bring a rigorous process of inquiry and exploration to a new discipline while drawing on others' expertise in the new field. For these clinical researchers, who see the distinctions between the 2 fields, the "noise" of education research might be challenging, interesting, and even fun. These clinical researchers have discovered that their skills are the foundation for education research but often are not sufficient to lead and complete an education research study. For clinical researchers who are considering a foray into education, this editorial illuminates some of the subtle differences between clinical and education research.
Typically, clinical research explores the application of an intervention on patient outcomes. In many cases, the intervention is a drug or other therapy. Before clinical researchers begin their study, basic science researchers have already tackled the innovation and development of the new medication or therapy that will be studied. The clinical researcher measures the effectiveness of the intervention on patients by tracking a variety of outcomes. In education, the development of the innovation and the study of its effectiveness are often rolled into a single larger project, with the education researcher responsible for both facets. In fact, education researchers are frequently involved in all aspects of innovation; they assess need, design educational programs, and construct instruments to measure program effectiveness. Cook2 contrasts the development and testing of interventions in physical sciences and medical education. He argues:

To study such educational interventions, education researchers identify and measure outcomes beyond biological and physiological indicators. Educational interventions often target psychological or latent constructs rather than physical measures. A latent construct is something that we believe exists but that cannot be directly observed. Examples in education research include, among others, resident motivation, patient satisfaction, and physician empathy, well-being, and resilience. Researchers believe these constructs exist and that they influence whether and how individuals learn and perform, yet they are challenging to measure because they cannot be "seen" or precisely quantified. Instead, instruments must be carefully constructed to measure observable aspects of these latent constructs. A detailed discussion of how to do this is beyond the scope of this editorial but can be found in the literature.3,4

Medical education researchers need to focus careful attention not only on the development of these tools but also on their adoption by participants in the research study. Researchers need to be wary of employing data collection instruments without paying proper attention to their development. Assessment tools need to be conceptualized and calibrated in a specific way. Validity evidence for a data collection instrument is collected from, and applied to, groups of participants in specific situations, specific locations, and at specific points in time. Thus, to say that a survey instrument is "valid and reliable" is inaccurate; researchers collect reliability and validity evidence for their instruments "in a specified context, with a particular sample, and for a particular purpose."5 As such, education researchers need to assess the "validation" of their research tools as an accumulation of evidence to support the tool's intended use.6
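To make that accumulation of evidence concrete, consider internal consistency, one common piece of reliability evidence for a multi-item survey. The sketch below is a minimal illustration, not a prescribed procedure: the 4-item empathy scale, the resident responses, and the sample size are all hypothetical, and an acceptable coefficient in this setting would say nothing about the instrument's behavior with a different sample or for a different purpose.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    item_scores: 2-D array, rows = respondents, columns = survey items.
    """
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 residents answering a 4-item empathy survey
# on a 5-point Likert scale. A real study would use a far larger sample.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

Even a high alpha here would be evidence about this sample and context only; validity evidence for the intended interpretation of the scores must still be gathered separately.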
Although developing or acquiring the tools to measure latent constructs can be difficult, identifying which latent constructs to study can also be challenging. Clinical researchers may rely on knowledge of disease states and biological and physiological processes to identify appropriate indicators to manipulate and measure. Education researchers rely on the processes described by theories. The role of theory is critical to understanding the application of education research. Having a theoretic component to one's research "allows for a reasoned choice of action that can be justified to oneself and discussed with others."7 What's more, theory helps articulate what variables might be linked together and why, and allows the researcher to make hypotheses and define the context and scope of a study from the investigator's local perspective.

In its description of learning health systems, the Institute of Medicine8 called for clinicians to view research and practice as processes linked together in a pathway toward high-quality care, such that practice identifies gaps in knowledge and research fills in those gaps. In education, the gap in knowledge is viewed from a theoretic perspective, allowing the researchers to highlight the cognitive, behavioral, and/or social forces at play.7

How does this translate into practice? Our clinician studying an educational intervention to help her patients lose weight might apply a theory explaining the link between knowledge and/or attitudes and weight loss behaviors (eg, exercise). Theories offer a way to view phenomena or educational processes more holistically, which allows for better study designs that consider the interactions at play in human behavior, knowledge, skills, and attitudes. Articulating relevant theory does more than justify a study approach; it allows for appropriately targeted exploration and interpretation and gives other researchers and practitioners a framework for understanding how an individual study fits within the field of education research.

Clinical research relies on methodological and statistical prowess to isolate the effects of a specific therapeutic intervention, and confounding variables in a hypothesis-testing model are seen as noise that shrouds the signal. In education research, hypothesis testing also requires attention to confounders. The subtle difference is the nature of the intervention, which may be more complicated than it appears. Consider the "straightforward" study measuring the effect of a curriculum on communication skills in new residents. The study includes a presurvey, an educational session, and, at a later date, a postsurvey. One might anticipate that biases and confounders could diminish the effectiveness of the intervention, yet construing the confounders as simply "nuisance variables" may miss the complexity of the intervention and the reasons why it may or may not "work." Such misunderstanding occurs in education research when researchers forget that the dynamic local context of the educational session may, in fact, be a critical component of its effectiveness.

Consider an intervention that includes a new type of teaching method. What would happen if the teaching session was advertised and introduced by a well-liked, charismatic dean? If positive outcomes were observed, is it plausible that those outcomes were the result, at least in part, of the dean's enthusiasm and charisma? Norman9 made a similar point when he said, "In trying to replicate the intervention, do we train teachers all about [the intricacies of the new intervention]? Or do we just tell them to go out and be enthusiastic?" In other words, when we equate teaching a lesson with prescribing a drug, we strip away crucial information about why or how the intervention engaged participants in the first place.
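Returning to the "straightforward" pre/post study described above: its statistical core is often no more than a paired comparison, which is exactly why it cannot register the dean's charisma, the teacher's enthusiasm, or any other feature of the local context. A minimal sketch, assuming hypothetical communication-skills scores for 8 new residents:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical communication-skills scores (0-100) for 8 new residents,
# measured before and after the educational session.
pre  = np.array([62, 70, 55, 68, 60, 73, 58, 65])
post = np.array([70, 74, 60, 71, 66, 75, 63, 72])

# Paired t-test: did scores change from presurvey to postsurvey?
t_stat, p_value = ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Note what this analysis cannot see: who delivered the session, how it was introduced, or how residents interpreted it. A significant p-value confirms that scores moved, not why.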
Herein lies the considerable effect of context in education research. Artino10 implored education researchers to consider this complexity in his overview of situated cognition, a theory of how humans interact with their environment to think and learn. He noted that education research must capture the complexity of context by measuring not only the learners, but also the teachers and patients, and the interactions between them. Our educational interventions are almost always more than just a set of PowerPoint (Microsoft) slides or handouts. Patricio and Vaz Carneiro11 provide an example of this phenomenon:

The inherent complexity in medical education, and thus, medical education research, has been described and debated at length.1,12–14 Renowned medical education experts disagree about how best to address that complexity when conducting research. Although education researchers struggle with a pathway toward explaining context and complexity, so too do clinical researchers. According to the Institute of Medicine,8 clinical trials have become so stripped of context that their relevance is unclear.

One way that education researchers have explored the complex systems of education is through qualitative inquiry, the adoption of which has been more frequent in education research than in clinical research. Qualitative methods are essential because they elucidate how and why interventions might (or might not) work, and they provide "a rigorous alternative to armchair hypothesizing"15 for specific problems. Qualitative researchers trade generalizability for more robust explorations into the complex nature of interactions in education, often at the local level. In-depth understanding of that complexity better prepares researchers to establish interventions appropriate for the local environment. Indeed, understanding the context is a reasonable starting point for any line of inquiry in education research, either by review of the literature or by qualitative exploration. Ultimately, education research requires comfort with methods that elucidate the inherent limitations and uncertainties accompanying the complexities of teaching and learning.

Answering education and clinical research questions draws on some similar skills but ultimately relies on different expertise. A researcher comfortable with randomized controlled trials is likely unprepared for an exploration of a social-cognitive perspective on teaching end-of-life discussions to residents. Integration of theory, definition of constructs, and development of instruments are often overwhelming and intimidating processes for novice education researchers and for clinical researchers seeking to directly apply their research skills. Although a background of inquiry and scholarly approaches provides common ground for clinical and education researchers, many concepts require a reframing of assumptions. Some elements where the 2 disciplines differ are discussed in this article, but several are not, including appropriate selection of outcomes, Institutional Review Board considerations, funding opportunities (or lack thereof), and outlets for publishing.

Clinical researchers would be well served to take a few simple steps to prepare themselves for the potentially bumpy transition from clinical to education research.
First, clinical researchers should review relevant education research articles before embarking on an education study. Second, they should reach out to experienced education researchers for feedback and advice, as well as guidance on potential problems. Third, they should explore the avenues for disseminating educational scholarship.

Clinicians engage in education every day (whether with trainees, colleagues, or patients), and research is a valuable tool for improving their understanding and capabilities as educators. Clinical researchers who supplement their knowledge and skills with the concepts discussed in this editorial will significantly increase their likelihood of being successful in education research.
