Jonathan L. Mezrich, MD, JD, MBA, LLM, has proposed some interesting ideas, one in particular that all of us should think about seriously. Radiologists in general do not pay enough attention to trying to uncover their mistakes and learn from them. On a typical working day, a radiologist might read several dozen cases, maybe even more than 100. Many are routine normal studies or “no changes,” but there are bound to be some that have interesting, perplexing, or unexpected findings. We read these cases, transmit our reports to the referring physicians, and perhaps consult with them by phone or in person. But often, we never get clinical follow-up to learn whether our interpretations were correct. Any follow-up we do get is sporadic and informal. I would be willing to bet that there is no radiology department in the country, academic or private, that tracks every single challenging case to find out whether the radiologist’s interpretation was correct. That means that we are all missing out on opportunities for learning and quality improvement. At most, academic departments hold occasional bland “quality assurance” (QA) or “missed case” conferences, at which anonymous cases are shown, identities are scrubbed, and no blame is attached.

What to do about this? There are no easy answers. All of us are busy, working long hours, and already overburdened with bureaucratic and administrative demands. Tracking every challenging case and obtaining definitive follow-up would be a huge logistical undertaking: time-consuming and probably necessitating the hiring of several administrative assistants or radiologist assistants, or adding work for the residents. It is not likely to happen in this era of belt tightening. Moreover, no one likes to publicly admit mistakes. It is easier and more discreet to sweep them under the rug and keep them private. Even good QA programs such as the ACR’s RADPEER™ do not necessarily require that definitive follow-up be obtained.
Dr Mezrich, currently a resident in radiology, has drawn on his background as an attorney and surgical resident to propose that radiologists adopt the more rigorous approach that surgeons have used for decades: the morbidity and mortality (M&M) conference. He describes M&M conferences as more adversarial and confrontational than the typically passive QA or interesting-case conferences held in many academic radiology departments. Presumably, all or most complications that occur on a surgical service are discussed at these M&M conferences. One problem with applying this approach to radiology is the sheer volume of cases. A typical general surgery service might perform several dozen operations per week. Everyone knows who the operating surgeon is, because that surgeon remains involved with the patient for an extended period. Because of the nature of surgery, the final diagnosis and any complications are usually apparent by the end of the patient’s hospital stay. In contrast, a radiology service might