Quality cancer care does not depend only on research findings, treatment improvements, and practice guidelines; these are all for naught unless clinical oncologists convert them into day-to-day practice. Although patients’ adherence to medical recommendations is much studied (see DiMatteo for a review of 50 years of such research), the investigation of physicians’ adherence to guidelines is far more limited. Unless there is such adherence, a guideline will not improve the quality of cancer care; to paraphrase Peter F. Drucker, the father of modern management, the best guideline is only a good intention unless it degenerates into clinical care. Indeed, Medicare is investigating how to incorporate performance-based payments tied to guidelines into its reimbursement systems. A primary question then becomes: how well are such guidelines translated into ongoing clinical care? This question must be coupled with a second: does the quality of care a cancer patient receives, as measured against practice guidelines and expert opinion, vary depending on where the patient is treated, suggesting that some practices are better at implementing guidelines than others?

In this issue of the Journal of Clinical Oncology, Neuss et al attempt to answer both questions by presenting the Quality Oncology Practice Initiative (QOPI). This first-step initiative began by using snowball sampling to identify board-certified oncologists interested in the design and measurement of practice quality. Using this group of oncologists, they developed yes/no quality measures based on expert opinion, practice guidelines, and Joint Commission on Accreditation of Healthcare Organizations–like questions regarding patient/physician interactions. These measures were then applied at seven practices in two rounds, using reviews of up to 85 sequentially selected charts per practice.

Their findings clearly show substantial variation among oncology practices. For example, the rate at which granulocyte colony-stimulating factors were given per guideline ranged from 0% to 88%. This is particularly troublesome because most practices in the sample had active clinical research or quality improvement programs. Only three of the 11 measures did not differ significantly among practices at the .10 level. Imagine what the differences would have been had the “typical” oncology practice been evaluated instead of those at the high end of compliance!

Even more disturbing from a process-control standpoint is that there was no consistent improvement in the QOPI quality indicators from round 1 to round 2. Although compliance on most measures increased (though not significantly), there were two statistically significant changes between rounds, and one of those was a decline (erythroid growth factors, from 72% to 60%)! The old management saying that “you get what you measure” appears not to hold for quality in some oncology practices.

Although the research has additional limitations (such as no mention of inter-rater reliability, the lack of representative samples, the use of charts rather than oncologists as the unit of analysis, and the omission of all eight Institute of Medicine areas), there is some good news in the findings. First, and foremost, the research demonstrates that process quality benchmark studies, such as those found in other industries, can be done in oncology practices.
This is important because clinical oncologists can then translate lessons learned in other industries rather than delay practice changes by attempting to learn only from their own practices or from others in the medical arena. Second, the paper shows that process quality evaluation can be done rapidly and cost-effectively; the reviews were relatively inexpensive, at just over $1,000 per practice for two rounds of chart abstractions. Finally, the research highlights that even in a small sample