Abstract

Benchmarking, though not a new concept to clinical engineering (CE) managers, continues to be redefined and promoted as a "must have" by CE professionals. How we use benchmarking, and how we should use it, is the crux of this article. We need to prioritize and focus our resources in the ways that matter most to our fellow caregivers and patients. There is much valuable information to be gained from benchmarking articles offered by AAMI, ECRI Institute, and the trade journals. And, as one might expect, there is a wide range of opinion within the CE community as to which benchmarks are most important, how the data should be collected, and how various benchmarks can or should be used to support the growth and further development of a CE department. All too often, those of us responsible for managing CE departments fall into the trap of collecting and presenting data simply because we have always done so. This practice is truly a "sacred cow." Why do we collect the data? We do it to justify our existence, to prove our worth, and/or to justify needed additional resources. We have been guilty of this for the past 30 years. We have always generated a year-end report for senior management in an attempt to wow them with all kinds of data: preventive maintenance (PM) compliance, response time, open vs. closed work orders, productivity, cost-of-service ratio, in-house vs. vendor repairs, etc., all nicely tabulated and formatted to identify trends. Impressive, no? Yet can we really identify significant, actionable outcomes from year-to-year comparisons? Some metrics go up, some go down, and others barely change. Then we compare our data against similar metrics from other organizations, and interpretation of the results dissipates into concerns over data accuracy and integrity. After all, our CE profession cannot even agree on how to count devices (is it one monitor, or one monitor with five modules?); what purchase cost to use (list price vs. our negotiated, top-secret price); or how to define a productive employee (captured work hours vs. available hours). With all these benchmarking variations and ambiguities in how and what to measure, is it any wonder that CE professionals have found it challenging to determine how best to use them? Depending on how our numbers compare, we can look either better or worse than the CE program across town. Yet both programs may have little or an unquantifiable impact on the quality of care and on the efficiency of the staff we serve. Using our sacred-cow benchmarks, we simply don't know.
