HEALTH CARE–ACQUIRED INFECTIONS, PARTICULARLY those due to antimicrobial-resistant bacteria, have received significant attention in recent years. Despite work focused on elucidating the epidemiology and effects of such infections, success in curbing their emergence remains elusive. Few new classes of antibiotics are even in the earliest stages of development, making efforts to prevent the emergence and spread of antimicrobial-resistant bacteria even more crucial. However, the evidence base on which potential infection-prevention strategies must be built is severely limited, because very few of the necessary clinical trials have been conducted. The generation of such scientific evidence is constrained by the perceived difficulty of completing the necessary studies and by the limited federal funding available for assessing infection-prevention interventions. An increasing number of legislative mandates, such as the state of Illinois’ methicillin-resistant Staphylococcus aureus (MRSA) active surveillance mandate, have been enacted to force the implementation of infection-prevention strategies. Many of these mandates lack a strong scientific foundation, and considerable resources are now devoted to supporting them without a clear template for how to evaluate their clinical and economic ramifications. To protect patients effectively, rigorous studies must be conducted to assess the comparative effectiveness of different infection-prevention interventions. Thus, scientists and funding agencies need to recognize the strengths and limitations of the epidemiological methods that could be used to address these critical questions. This commentary focuses on 3 complementary methods for comparative effectiveness research in infection prevention: cluster randomized trials, quasi-experimental studies, and mathematical models. A well-designed and adequately powered randomized controlled trial (RCT) provides the most rigorous evidence for or against the efficacy of a given intervention.
In health care epidemiology, interventions to reduce device-related infections (eg, antimicrobial-coated central venous catheters) are often amenable to an RCT investigative approach because the intervention and the observed benefit occur at the level of a single patient and the effect of the intervention for one patient is independent of its effect on another patient. However, many interventions focus on population-level strategies and are not amenable to an RCT approach. For example, MRSA screening programs test patients for MRSA carriage and isolate colonized patients to prevent transmission of MRSA. These screening programs indirectly benefit patients who are not isolated. To assess population-level interventions, alternatives to RCTs are needed. The cluster randomized trial is well suited to studying the comparative effectiveness of population-level interventions. Cluster randomized trials may involve randomization at different levels, including the full hospital or individual hospital units. These trials are complicated, costly, and time-consuming but are absolutely vital if population-level interventions are to be adequately evaluated. In health care epidemiology, clinicians and infection control personnel are frequently compelled to act quickly because of patient safety concerns (eg, an outbreak). Using a cluster randomized trial approach in this setting is both infeasible and often unethical. It is also of interest to study changes that occur outside the control of the investigator (eg, legislative mandates). Assessing the potential effects of these broader initiatives is critical for building the evidence base for health care epidemiology. In these instances, a cluster randomized trial approach also would not be feasible. An alternative design to RCTs and cluster RCTs is the quasi-experimental (QE) study (sometimes called a “before-after” study). Quasi-experimental studies aim to evaluate interventions but do not use a randomized control group.
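One reason cluster randomized trials are so costly is that randomizing whole units rather than individual patients inflates the required sample size. A minimal sketch of this point, using the standard design-effect formula (not given in the commentary itself; all numeric inputs below are hypothetical):

```python
# Illustrative sketch, not from the commentary: randomizing clusters
# (eg, hospital units) instead of individual patients inflates the required
# sample size by the standard design effect, DEFF = 1 + (m - 1) * ICC,
# where m is the average cluster size and ICC is the intracluster
# correlation coefficient (how similar outcomes are within a unit).

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation relative to individual randomization."""
    return 1 + (cluster_size - 1) * icc

def clustered_sample_size(n_individual: int, cluster_size: int, icc: float) -> float:
    """Total patients needed when randomizing whole units."""
    return n_individual * design_effect(cluster_size, icc)

# Hypothetical example: a trial needing 400 patients under individual
# randomization, run across 25-bed units with an ICC of 0.05.
print(design_effect(25, 0.05))               # ≈ 2.2
print(clustered_sample_size(400, 25, 0.05))  # ≈ 880
```

Even a modest within-unit correlation more than doubles the required enrollment here, which is consistent with the text's point that these trials demand substantial resources.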
In the simplest QE design, a population serves as its own control during a baseline period of observation. An intervention is then implemented, and a subsequent period of observation is completed. Changes in the outcome of interest are then compared before and after the time of the intervention. The role of the QE design in the investigation of infection-prevention interventions has been described. If not properly designed, QE studies are subject to several threats to internal validity, including uncontrolled confounding and selection bias. Several approaches may be used in QE studies to address these limitations, including assessment of outcomes during a prolonged baseline period, use of nonequivalent