Abstract

It is not uncommon to hear a clinician say something like this: “The base wedge osteotomy is the best procedure for correction of a bunion deformity with an intermetatarsal angle that is greater than 16°.” Or, you may hear a statement like this: “A distal metaphyseal osteotomy is the best choice for correction of a bunion deformity, even when the intermetatarsal angle is greater than 16°.” We have all heard similar claims made by experienced surgeons in regard to surgical interventions. We may have even made such statements ourselves from time to time. Of course, both of the statements may be correct (or incorrect) to a certain degree, and the experienced practitioner understands that treatments need to be individualized to the needs of specific patients. The prudent practitioner also understands that the selection of a specific treatment has to be in keeping with the skills and experience of the individual surgeon. Selecting a specific treatment for an individual patient, therefore, entails a thought process that involves the application of two forms of observational clinical evidence: 1) the surgeon’s experience and 2) the patient’s needs. For many conditions, these two forms of clinical evidence provide the basis on which clinical decisions are made, and, for many of us, they have been the foundation of our practice of “evidence-based medicine” for many years. For many diagnostic, therapeutic, and prognostic questions, however, the published literature contains other forms of clinical evidence that may be useful when we are trying to decide what may be best for our patients. In this editorial, I would like to briefly describe some of the different forms of clinical evidence that are available for our use, and to categorize them in order of their likelihood to provide valid results that can be used to formulate meaningful conclusions about patient management (Figure).
Throughout this discourse, I urge the reader to keep in mind that all of these forms of evidence have some degree of meaning in the clinical realm, and it is the clinician’s responsibility to critically appraise the evidence and its meaning before making a clinical decision.

The randomized, controlled trial (RCT) is generally considered to be the method of clinical investigation that is most likely to yield valid results and meaningful conclusions. The RCT is actually a clinical experiment that tests a hypothesis related to patient care. The reason that results from the RCT are most likely to be valid is that this form of clinical investigation uses scientific methodology that limits bias in the investigation. Key elements of the well-designed RCT include a clearly stated clinical question, well-defined primary and secondary aims, and detailed inclusion and exclusion criteria; the use of valid health measurements and analysis of all of the variables that a reasonable clinician would consider important; randomization of participants to different treatment groups and the use of an intention-to-treat analysis (where participants are analyzed as they were randomized), thus limiting selection bias; blinding of participants and outcomes assessors, thus limiting measurement bias; the use of an adequate sample size, thus affording the ability to detect a statistically significant difference (a difference that is not likely to be due to chance alone) if one exists; and the use of statistical tests that fit the type and distribution of the data, thus decreasing the likelihood of making incorrect statistical assumptions. Although it is not a guarantee, use of these building blocks of clinical evidence increases the likelihood that the results of an investigation will be valid.

Unfortunately, some RCTs are not well designed, and, as a result of methodological faults, they may convey systematic biases that threaten the validity of the results and conclusions.
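The adequate-sample-size element described above lends itself to a worked example. The sketch below is generic and not drawn from this editorial: the 60% versus 80% “good outcome” rates and the helper name are illustrative assumptions. It applies the standard normal-approximation formula for comparing two proportions:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per arm to detect a difference between
    two outcome proportions with a two-sided test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detect a 60% vs. 80% success rate at alpha = 0.05, 80% power
print(n_per_group(0.60, 0.80))  # 79 participants per arm
```

Note how a smaller true difference or a stricter alpha drives the required sample size up quickly, which is one reason underpowered trials are common.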
Furthermore, some scientifically sound RCTs may have used such restrictive inclusion or exclusion criteria, or such unrealistic interventional protocols, that the results and conclusions are not generalizable to clinical practice (they lack external validity), even though, within the confines of the experiment, the results were likely to be valid (internal validity). In other words, an intervention that demonstrates efficacy within the confines of the investigation may or may not demonstrate effectiveness in the “real world” clinical setting. When designing an RCT, investigators must therefore balance the restrictions that protect internal validity against the loss of “real world” generalizability that those restrictions impose.

When the results of RCTs are not available, either because such studies would be unethical or because they are not feasible, observational investigations enable hypothesis testing and provide the next highest level of clinical evidence. Such investigations include prospective and retrospective cohort studies and the case-control study. With the exception of randomization to different therapies, these investigations use, to varying degrees, the building blocks of good clinical evidence, and, to varying degrees, they also limit bias. As such, they are analytical, observational studies rather than interventional experiments. Cohort studies enable investigators to calculate the incidence of an outcome of interest over time. Prospective investigations also enable the investigators to plan, a priori, to measure all of the independent variables that a reasonable clinician would consider important in regard to the outcome of interest. Such information may be missing when clinical data are analyzed retrospectively, and, in such studies, a sensitivity analysis can be useful in determining how resistant the results are to the potential influence of an unmeasured confounding variable.
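One concrete form that such a sensitivity analysis can take is the E-value of VanderWeele and Ding, which reports how strong an unmeasured confounder’s associations with both exposure and outcome would have to be, on the risk-ratio scale, to fully explain away an observed association. A minimal sketch follows; the observed risk ratio of 2.0 is a hypothetical value, not a figure from this editorial:

```python
from math import sqrt

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio: the minimum strength of association
    an unmeasured confounder would need with both exposure and outcome to
    explain the observed association away entirely."""
    if rr < 1:          # invert protective associations first
        rr = 1.0 / rr
    return rr + sqrt(rr * (rr - 1))

# Hypothetical observed risk ratio of 2.0 in a retrospective cohort
print(round(e_value(2.0), 2))  # 3.41
```

A large E-value suggests a robust result: only a very strong unmeasured confounder could account for the observed association.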
Furthermore, in prospective cohort studies, the influence of selection bias can be somewhat limited when participants are enrolled in the investigation in a consecutive fashion.

Hypothesis-generating clinical investigations include analyses of secular trends and cross-sectional studies, which enable investigators to measure the prevalence of an outcome of interest at a point in time. Such investigations do not provide any method for determining why a specific association exists between a cause (exposure or risk factor) and an effect (outcome); however, they are very important when it comes to measuring the burden of a disease on a population and, as such, can be useful to government and insurance agencies, as well as for determining sample sizes for RCTs. Case series and case reports are also useful hypothesis-generating forms of clinical investigation. Classically, the case report, or a small series, is an important way of presenting to the scientific community the course of a rare disease or the results of an unusual treatment. Such reports, by their nature, are subject to biases that can influence the reported outcome, and readers should always be aware of this limitation relative to the quality of the clinical evidence conveyed by a case report or series.

Although animal studies and experiments on cadaveric and plastic bone models are crucial when it comes to establishing a basic understanding of the safety and efficacy of an intervention, or the specific response of, say, a hardware fixation construct to mechanical loads, these forms of evidence are far removed from the actual clinical setting, and, as such, conclusions based on their results may not translate to the clinical realm. Regardless of the type of clinical investigative methodology used, it is important for clinical investigators to understand and list the limitations of their study.
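The two measures contrasted above, incidence (from cohort studies) and prevalence (from cross-sectional studies), reduce to simple arithmetic. The counts in this sketch are invented for illustration:

```python
def incidence_rate(new_cases: int, person_years: float) -> float:
    """Incidence rate: new cases per unit of person-time at risk (cohort design)."""
    return new_cases / person_years

def point_prevalence(existing_cases: int, population: int) -> float:
    """Point prevalence: proportion of a population with the condition at one
    moment in time (cross-sectional design)."""
    return existing_cases / population

# Hypothetical: 12 new cases over 400 person-years of follow-up,
# versus 30 existing cases found among 1,000 people surveyed today
print(incidence_rate(12, 400))     # 0.03 new cases per person-year
print(point_prevalence(30, 1000))  # 0.03 (3% of the population)
```

The two numbers can coincide, as here, while answering different questions: the first describes risk accruing over time, the second the burden of disease at a single moment.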
Failure to discuss the limitations of a study is, in and of itself, a shortcoming of the report of an investigation.

So, to return to the question of choosing between a base wedge osteotomy and a distal metaphyseal osteotomy for a bunion in the presence of an intermetatarsal angle greater than 16°, the interested clinician practicing evidence-based medicine will note that, at present, no RCT compares the two treatments with one another in patients with such a deformity. Therefore, the clinician must fall back on lesser forms of clinical evidence, combine these with his or her personal experience and the patient’s needs, and then determine an appropriate treatment plan.

In summary, evidence-based medicine consists of three pillars of clinical evidence: the surgeon’s experience, the patient’s needs, and the scientific information that is available in regard to a particular condition or treatment. Each of these elements is important, although information gained by means of rigorous scientific investigation is less likely to be tarnished by bias and, as a result, is more likely to be valid. It is the clinician’s responsibility to critically appraise the medical literature, to combine the results of scientific investigations with one’s own experience and the needs of the patient, and to formulate a suitable treatment plan based on this combination of clinical evidence.

The interested reader is encouraged to review the following recommended articles related to EBM and the hierarchy of clinical evidence:

Sacks H, Chalmers TC, Smith H Jr. Randomized versus historical controls for clinical trials. Am J Med 72:233–240, 1982.

Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ 312:71–72, 1996.

Benson K, Hartz AJ. A comparison of observational and randomized controlled trials. N Engl J Med 342:1878–1886, 2000.

Concato J, Shah N, Horwitz RI.
Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 342:1887–1892, 2000.

Turlick MA, Kushner D, Stock D. J Am Podiatr Med Assoc 93:392–398, 2003.
