Abstract

Preparing an editorial for the October 2008 issue of Health Services Research (HSR) on several studies of performance measures in healthcare organizations, while participating in the hectic last weeks of Marie-Claire Rosenberg's thesis on nursing care and hospital performance (Rosenberg 2008; ABF is her thesis supervisor), triggered a revelation. By coincidence, it was exactly 30 years ago that Ann Barry Flood published the first article from her thesis on hospital structure and performance (Flood and Scott 1978) and was a coauthor of the last major report on quality of care in US hospitals from the Stanford Center for Health Care Research (SCHSR) (Forrest et al. 1978). What has changed since the SCHSR's reports, and how does the work in this issue advance the field?

Thirty years ago, in follow-up to a provocative article suggesting that American hospitals varied dramatically in their quality of surgical care, SCHSR was among the first to use detailed risk adjustment, based on patient-level health status and treatment, to adjust outcomes in order to assess hospital-level performance. The basic design paired two overlapping studies: an extensive study with relatively scant information about a large sample of 1,224 hospitals (using the American Hospital Association's [AHA] annual survey) and some 600,000 of their surgical patients (using computerized abstracts of medical charts), and a detailed prospective study of 10,000 surgical patients followed in 17 of these hospitals, for which SCHSR collected seven forms on each patient's outcomes and postoperative treatment, interviewed about 80 people, and surveyed several hundred physicians and nurses.

The similarities to today's work? The Center's work focused on outcomes as the “ultimate” indicator of quality, in part because, except for a handful of measures, there was little evidence that process measures led to substantial improvements in health. A chief concern about using outcomes was how to measure them reliably and adjust for risk factors, because the outcomes that could be measured most reliably (death) also tended to be rare. Studies of organizational factors tended to focus on the importance of physicians, both as the surgeons in charge of individual patients and as medical staff with responsibility for peer review and professional oversight. Not surprisingly, we too found important variation in hospital quality: variation in outcomes that remained after stringent attempts to adjust for patient risk factors in both the large, crude study and the small, intensive study. Two “surprises” concerned organizational factors related to better quality: (1) The more explicit the policies and procedures for the nursing staff, the better the outcomes. (This finding seemed to contradict the professional model, which implied that flexibility and judgment should be left to individuals; instead, we found that rigid rules and regulations, such as those that promote safety and prevent equipment failure or loss, were important.) (2) Hospital-based factors explained more of the variation in adjusted outcomes than did surgeon experience and training. (This finding was examined very carefully by the physicians on the team but held in test after test, including tests of the association between larger volume of cases of a particular type of surgery and better outcomes.)

What did we “forget” to look for 30 years ago? No one looked at the impact of system membership or vertical integration; these were not yet questions on the AHA survey, nor in organization theory about non-profit service sectors. We also paid no attention to health maintenance organizations (there were not many, and almost all charges were on a fee-for-service basis) or to market penetration and other environmental influences (one hospital administrator noted that the only reason he paid attention to other hospitals was that otherwise he had no idea what to charge). And quality improvement, newly reintroduced to businesses in the United States, was thought not to apply to healthcare.

Other differences: In order to run our logistic regressions to adjust for patient risk factors, we shipped our programming code across the country to one of the biggest mainframe computers, which took almost 48 hours to converge on five regressions involving 600,000 patients. We were also limited to the roughly one-fifth of the nation's hospitals that participated in an electronic system of patient chart abstracts. Neither claims nor medical records nor most inventory records were computerized, so a practical way of monitoring hospitals or paying for better performance was unimaginable.

Turning to the studies in this issue, five are of particular interest to this story. They continue the quest to measure quality and hold hospitals accountable for it; they examine measures reflecting nursing care in hospitals; and they look for practical ways that well-intentioned managers and providers can improve the quality their organizations provide.
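For readers who want a concrete sense of what “adjusting outcomes for patient risk factors” involves, the sketch below is a minimal illustration (not the SCHSR method) of the general approach now commonplace on a laptop: fit a logistic regression on patient-level risk factors, then compare each hospital's observed deaths with the deaths expected from its own patient mix. All data, variable names, and the 17-hospital grouping below are synthetic and purely illustrative.

```python
# Illustrative sketch only: risk-adjusted hospital outcome comparison.
# All data are synthetic; age, severity, and emergency are hypothetical
# stand-ins for patient-level risk factors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
hospital = rng.integers(0, 17, size=n)      # 17 hospitals, echoing the intensive study
age = rng.normal(60, 15, size=n)            # synthetic patient risk factors
severity = rng.normal(0, 1, size=n)
emergency = rng.integers(0, 2, size=n)

# Synthetic mortality driven only by patient factors in this toy example.
logit = -5 + 0.04 * age + 0.8 * severity + 0.6 * emergency
died = rng.random(n) < 1 / (1 + np.exp(-logit))

# Step 1: fit a patient-level risk model (ignoring hospital identity).
X = np.column_stack([age, severity, emergency])
model = LogisticRegression(max_iter=1000).fit(X, died)
expected_risk = model.predict_proba(X)[:, 1]

# Step 2: compare each hospital's observed deaths with the deaths
# expected from its own patient mix (an observed/expected ratio).
for h in range(17):
    mask = hospital == h
    observed = died[mask].sum()
    expected = expected_risk[mask].sum()
    print(f"hospital {h:2d}: O/E = {observed / expected:.2f}")
```

An O/E ratio well above 1 would flag a hospital whose deaths exceed what its case mix predicts. The hard problems the editorial describes (choosing the risk factors, measuring them reliably, and deciding how much residual variation to attribute to the hospital rather than to unmeasured patient differences) are exactly what such a sketch glosses over.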
