THE AMERICAN RECOVERY AND REINVESTMENT ACT will provide an unprecedented stimulus for translational and health services research. A $1.1 billion investment in comparative effectiveness research (CER) should produce a torrent of new information about the effectiveness of drugs, technologies, and interventions. For this to result in better, more cost-effective health care, stronger evidence is also needed to bridge the translational gap between clinical studies and everyday practice. In essence, this is CER for implementation strategies, a type of CER seriously underrepresented in current discourse but necessary to deliver on the Institute of Medicine's goals for improved health care quality.

Credit is due to the US Department of Health and Human Services for recognizing this need. Evaluation of implementation and dissemination strategies for mainstream CER is embedded within several programs funded by the American Recovery and Reinvestment Act that share the broad objective of spreading CER findings widely (total funding: $97 million; Karen Migdail, director of media relations, Agency for Healthcare Research and Quality, written communication, November 2010). For this investment to yield a timely return, researchers will need to adopt an array of pragmatic, context-sensitive study designs to determine which strategies are best for implementing evidence-based practices locally and at scale.

Traditional CER uses sophisticated data mining of longitudinal administrative databases to evaluate therapeutic and diagnostic interventions. These techniques could revolutionize the speed, population relevance, and context specificity of translational research. However, most databases currently lack the rich clinical and organizational information needed to take the critical next step of comparing implementation strategies.
Expansion of the electronic medical record and improvements in interoperability will help, especially if captured data include key information about organizational structures, clinical processes, risk factors and comorbidities, the true costs of diagnosis and treatment, and important outcomes such as functional status and quality of life. Obtaining and recording this information will require closer collaboration among clinicians, health services researchers, systems improvement experts, and health information technologists.

In the meantime, new knowledge of strategies to implement the findings of CER is likely to come mainly from prospective studies. Randomized controlled trials (RCTs) are the criterion standard of clinical research, and designs such as cluster randomization are useful for implementation research in which the unit of analysis is a clinician, microsystem, organization, or community. However, such RCTs tend to be slow, expensive, and insensitive to the heterogeneous contexts in which their output will be deployed. Their restrictive entry criteria limit the rate of recruitment and generalizability of results, and their stringent protocols do not easily accommodate emerging new knowledge about the interventions themselves or changes in the environment in which they are being applied. Given these limitations, it is not surprising that RCTs have been criticized for failing to predict real-world outcomes.

The pragmatic clinical trial attempts to address the limitations of conventional RCTs by seeking to reproduce conditions that the intervention will encounter in the real world. The entry criteria of pragmatic clinical trials aim to reflect the full range of patients and clinicians who will be using an intervention. This approach is well suited to research into systems of health care delivery that cannot be separated easily from everyday clinical practice.
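One reason cluster-randomized trials are slow and expensive is the design effect: patients within the same clinic or organization resemble one another, so each cluster contributes less independent information than the same number of individually randomized patients. The standard correction, DE = 1 + (m − 1) × ICC, can be sketched as follows (a hypothetical illustration; the cluster sizes and intraclass correlation are assumed for exposition, not drawn from any study):

```python
def design_effect(cluster_size, icc):
    """Design effect for a cluster-randomized trial:
    DE = 1 + (m - 1) * ICC, where m is the average cluster size
    and ICC is the intraclass correlation coefficient."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(total_n, cluster_size, icc):
    """Number of individually randomized patients that would carry
    the same information as total_n patients recruited in clusters."""
    return total_n / design_effect(cluster_size, icc)

# Hypothetical trial: 20 clinics of 50 patients each, ICC = 0.05
de = design_effect(50, 0.05)
n_eff = effective_sample_size(1000, 50, 0.05)
print(f"Design effect: {de:.2f}")   # Design effect: 3.45
print(f"Effective n:   {n_eff:.0f}")  # Effective n:   290
```

Even a modest intraclass correlation of 0.05 shrinks 1000 clustered patients to the information content of roughly 290 independent ones, which is part of why such trials recruit slowly and cost more than their individually randomized counterparts.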
Pragmatic study designs also can take into account the local adaptations and amendments that often occur when new strategies are introduced.

Another flexible approach, the adaptive clinical trial, makes provision for planned, reactive changes as the trial progresses. This is especially important in evaluating the effects of a systems improvement intervention when observations reveal low compliance with care processes that are not the targets of the protocol but are linked to outcomes. In an adaptive trial, such unanticipated problems can be addressed in near real time by protocol adjustment. Bayesian data analysis can enhance the efficiency and speed of adaptive trials.
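The Bayesian machinery behind such adaptive designs can be sketched with a conjugate Beta-Binomial model: as interim outcome data accrue, the posterior for each strategy's success rate is updated, and allocation of the next cluster can be tilted toward the strategy more likely to be best (Thompson sampling). All counts below are hypothetical:

```python
import random

def update_beta(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update: a Beta(alpha, beta) prior plus
    binomial data yields a Beta(alpha + s, beta + f) posterior."""
    return alpha + successes, beta + failures

# Uniform Beta(1, 1) priors; hypothetical interim compliance counts
# for two implementation strategies.
arm_a = update_beta(1, 1, successes=18, failures=12)  # -> Beta(19, 13)
arm_b = update_beta(1, 1, successes=24, failures=6)   # -> Beta(25, 7)

# Monte Carlo estimate of Pr(strategy B outperforms strategy A),
# the quantity an adaptive allocation rule would act on at an
# interim analysis.
random.seed(0)
draws = 10_000
b_better = sum(
    random.betavariate(*arm_b) > random.betavariate(*arm_a)
    for _ in range(draws)
)
print(f"Pr(B > A) ~ {b_better / draws:.2f}")
```

Because the Beta posterior is available in closed form, each interim look is a trivial computation rather than a full re-analysis, which is one source of the efficiency and speed gains noted above.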