Dear QA Q&A,

As the medical director of a laboratory, my team and I are always looking for ways to improve the quality of our clinical services in a cost-efficient manner. We have identified several opportunities to improve operations and reduce patient safety risk, and we are interested in knowing how various quality improvement (QI) tools such as Six Sigma or Plan-Do-Study-Act (PDSA) might help us. None of us, however, has any formal training in these tools, and we have no budget to hire consultants. We also were told recently by hospital education leadership that our laboratory is not fulfilling the mandatory Accreditation Council for Graduate Medical Education (ACGME) QI project1 for our pathology training program. Can you help us with a low-cost, high-yield, and easy-to-use QI tool for our practice and for our trainees?

Dear reader,

There is no single “best” framework for QI activities, and each has advantages and disadvantages. The Six Sigma model reduces performance variation using a custom data-driven approach, but can be complex.2 Lean manufacturing QI concepts stress the importance of removing waste and improving productivity in daily workflows.3, 4 Total Quality Management focuses on a commitment to quality and process at all levels of the organization.5 For laboratories such as yours, and for trainees with little QI knowledge who are looking for a simple tool that is easily deployed and does not require industrial engineering expertise, we recommend the PDSA cycle, also known as the Model for Improvement. PDSA cycles can help your team put QI ideas into practice immediately. If your laboratory has an accredited cytopathology training program, involving trainees in QI projects will have the added benefit of fulfilling new ACGME QI regulatory requirements and will allow the next generation of pathologists to acquire critical leadership skills.

The PDSA cycle is a 4-stage, repeating process that evolved from the work of American statistician W. Edwards Deming.
His goal was to use the scientific method to improve business processes.6 Each letter of “P-D-S-A” stands for a critical phase in the cycle: “Plan, Do, Study, Act.” The cycle begins with developing a plan to test an improvement idea (Plan), followed by a small-scale experiment and data collection (Do). The team then observes and learns from the results (Study) and decides whether to roll out the change or to make modifications by initiating a new cycle of improvement (Act) (fig. 1). To achieve incremental progress, the model was designed to pilot new ideas quickly and efficiently in a structured way using data and iterative cycles. This experimental learning process ensures that reliable conclusions are drawn regarding the effectiveness of each intervention.7

The PDSA cycle has been applied widely in health care because of its simplicity and practicality, and it has been endorsed by the Institute for Healthcare Improvement.8 PDSA is flexible and can be used for nearly any QI challenge in the laboratory, from reducing the administrative workload through a redesign of the accessioning process to lowering the risk of patient mix-ups by implementing a paperless signout workflow. In this installment of QA Q&A focusing on the Model for Improvement, we will use a simple and common example of how PDSA cycles can improve cytology report signout turnaround time (TAT).

The timeliness of finalized pathology results is a critical aspect of quality management for gynecological cytology specimens. For gynecological cases, our laboratory struggles to meet a target of signing out final reports within 5 working days from specimen collection. Clinicians also have expressed concerns regarding unpredictable variation in TAT performance, with rates ranging from 30% to 60% of cases signed out within 5 days. Here, we will adopt a PDSA approach to improve TAT.

Planning is the first and foremost step in a PDSA cycle, and sufficient time and resources must be allocated to this stage.
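As an aside for readers who track this metric electronically, the 5-working-day target above can be computed directly from collection and signout dates. The following Python sketch is our own illustration (the function names and example dates are hypothetical, not from the laboratory described here):

```python
from datetime import date, timedelta

def working_days_between(collected: date, signed_out: date) -> int:
    """Count working days (Mon-Fri) from specimen collection to signout,
    excluding the collection day itself."""
    days = 0
    current = collected
    while current < signed_out:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
    return days

def meets_tat_target(collected: date, signed_out: date, target_days: int = 5) -> bool:
    """True if the case was finalized within the TAT target."""
    return working_days_between(collected, signed_out) <= target_days

# Hypothetical case: collected Monday 2025-01-06, signed out Friday 2025-01-10
# -> 4 working days, which meets the 5-working-day target
```

A real implementation would also need to account for institutional holidays, which this sketch ignores for brevity.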
Thorough and careful design helps to prevent wasted PDSA cycles and also determines the final quality and success of the QI project. Identifying and recruiting team members with subject matter knowledge of the problem or opportunity for improvement is a critical first step. Making sustainable improvements requires a collaborative team effort. A multidisciplinary team brings different expertise and perspectives that are critical to effective change. This helps to break down workflow silos and facilitates outside-the-box thinking for improvement ideas. Frontline employees should be included because they are closest to the work and understand vital details regarding the specific challenges at hand. Engaging frontline staff also can be beneficial for later stages of the PDSA cycle when change management is required. The pilot implementation (“Do”) phase will be more successful if frontline staff are engaged in early planning (fig. 2). For this project, we assembled a QI team composed of the cytology laboratory medical director, local operational leadership (the cytology supervisor), and frontline employees (cytotechnologists and laboratory assistants).

Before generating improvement ideas, a concise problem statement with clear goals must be delineated.3 This step helps the PDSA team to stay focused on the problem without overextending its scope and to set specific, achievable, and time-bound targets. An example using SMART goals would be “Within 6 months, we plan to improve gynecologic cytology specimen TAT to a target of 75% of cases finalized within 5 days of specimen collection.” An example without SMART goals would be “We plan to improve gynecologic cytology specimen TAT, which has been a key problem and source of complaints by treating clinicians.” Using SMART goals builds in feasibility. For example, the SMART system would prevent one from pursuing a project that requires the purchase of a new hospital information technology system.
The SMART system ensures that goals are relevant and quantifiable, with specific timelines specified up front. The effectiveness of your intervention can be assessed only if you collect data.3 There are 3 different types of measurement in PDSA cycles that might be appropriate.

Outcome measures demonstrate sustainable improvement in the final goal. An example would be “For this project, we will monitor the total number of accessioned gynecological cytology cases every week, and the number of cases that are signed out within 5 working days from the specimen collection date, to determine the percentage of cases falling within our QI target.”

Process measures are associated with an individual improvement idea. They enable the team to understand whether the change itself has been carried out as planned. An example would be “Using our project as an example, we identified that a large number of gynecological specimens are not logged into our laboratory information system on the day the specimen is received. We believe that a reduction in waiting time for accessioning will improve TAT. Therefore, we will measure the percentage of same-day login before and after the change.” Process measurements help you to understand your improvement effort and, if the PDSA cycle fails to demonstrate the change you expected, to decipher whether poor adherence to the implementation plan contributed to the failure.

Sometimes, changes to laboratory processes cause unexpected consequences in other areas of the laboratory. Balancing measures are used to assess these potentially negative impacts. An example would be “We are concerned that our PDSA study might result in gynecologic specimens receiving extra priority at the expense of other services. We will monitor turnaround performance for non-gynecological cases while our PDSA project is in place, to determine whether these cases were affected by the change” (fig. 3).
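To make the weekly outcome measure concrete, it could be tallied from a simple case log as in this Python sketch. The data structure and names here are our own illustrative assumptions, not the laboratory's actual reporting system:

```python
from collections import defaultdict

def weekly_outcome_measure(case_log):
    """case_log: iterable of (iso_week, met_target) tuples, where met_target
    is True if the case was signed out within 5 working days of collection.
    Returns the outcome measure per week: percent of cases meeting the target."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for week, met in case_log:
        totals[week] += 1
        if met:
            hits[week] += 1
    return {week: round(100.0 * hits[week] / totals[week], 1) for week in totals}

# Hypothetical log: week "2023-W01" has 1 of 2 cases on target (50.0%),
# week "2023-W02" has 1 of 1 (100.0%)
log = [("2023-W01", True), ("2023-W01", False), ("2023-W02", True)]
```

The same tally applied to a same-day-login flag instead of the TAT flag would yield the process measure described above.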
Although the PDSA framework encourages bold and simple ideas, the key principle of PDSA cycles is to test a feasible idea rapidly on a small scale and discover whether it leads to incremental improvement.3 Therefore, large, system-wide projects should be divided into small tasks and tested using multiple PDSA cycles before they are implemented globally. Root cause analysis is a technique that can be used to identify effective improvement ideas. It exposes the underlying causes of a complex problem and therefore can help to improve the efficiency of deploying PDSA cycles. Your hospital's quality department may have an individual trained in root cause analysis available to help you with this technique. An example would be “After completing detailed process mapping and root cause analysis, we identified 2 targets for improvement. The first is a capacity bottleneck in accessioning gynecological specimens, which we addressed by increasing the laboratory assistant staffing available to log in specimens by 0.25 FTE (PDSA cycle 1). The second challenge is the performance variation among cytotechnologists and cytopathologists. We aimed to minimize variation by collecting TAT data regarding individual cytotechnologists (PDSA cycle 2) and cytopathologists (PDSA cycle 3), and providing them with confidential, personalized feedback.”

The major tasks in this stage are to: 1) measure baseline data; 2) pilot the improvement ideas; and 3) observe and collect follow-up data. It is critical to collect baseline data prior to implementing any changes. Such data not only confirm the need for the QI initiative but also allow you to evaluate the effectiveness of your intervention(s) through comparison of preimplementation and postimplementation results. In addition to gathering quantitative data, you should consider collecting qualitative feedback by observing processes and interviewing participants.
As you watch what happens during the experiment period, you will be able to document how the people involved react to the changes, the problems raised, and/or any unexpected effects. Qualitative feedback can help you adjust existing plans and can lead to new improvement ideas.

Once the hard work of implementing pilot changes and measuring their impact is complete, you may have hundreds or thousands of data points. How best to analyze these data and draw meaningful conclusions is the key at this stage. Instead of poring over numerical spreadsheets and summary reports, consider visualizing the data using charts and graphics; typically, this is a more efficient way to evaluate outcomes. Run charts display observed data over time and are the most frequently used graphic in QI. Run charts make trends or patterns over a specified period of time easy to identify, and comparisons between different PDSA cycles make it easy to recognize effective change.10, 11 Other data visualization techniques, such as control charts for detecting process stability and reliability and Pareto charts for identifying the most frequent defects, also can be adopted depending on the type of data you collect.12 An example follows. In the top run chart, the percentage of cases signed out within 5 days is plotted against the week, with the PDSA cycles indicated using text and different background shading. In the bottom run chart, the percentage of cases accessioned on the day they were received is plotted by week.

The effectiveness of the interventions can be determined by comparing test results with the goals we drafted in stage 1. We may find that the change has been very successful, but it also is common to find that the results failed to meet expectations. The purpose is not to judge the PDSA cycle by assigning a binary “pass” or “fail.” In many cases in which the results did not attain preset goals, the change still might achieve some improvement when compared with the baseline.
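Run charts are typically read by noting where points fall relative to a median centerline, so that summary can be pre-computed before plotting. The following Python sketch is our own simplified illustration (not the authors' analysis code) of one common run-chart reading: counting runs, i.e., stretches of consecutive points on the same side of the median, which practitioners use to flag non-random shifts:

```python
from statistics import median

def run_chart_summary(weekly_values):
    """Summarize a run chart: the median centerline, the number of runs
    (consecutive points on the same side of the median; points that land
    exactly ON the median are skipped), and how many points sit above it."""
    m = median(weekly_values)
    sides = [v > m for v in weekly_values if v != m]  # True = above median
    if not sides:
        return {"median": m, "runs": 0, "points_above": 0}
    # A new run starts each time the side of the median flips
    runs = 1 + sum(1 for a, b in zip(sides, sides[1:]) if a != b)
    return {"median": m, "runs": runs, "points_above": sum(sides)}

# Hypothetical weekly TAT success rates (%): an unusually small number of
# runs suggests a sustained shift, e.g., after an intervention started
rates = [45, 50, 48, 62, 65, 70]
```

Formal run-chart rules compare the observed number of runs against tabulated limits; this sketch computes only the raw counts that feed such rules.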
The process of evaluating the results is more about identifying trends or patterns and learning from what worked and what did not. An example would be “In PDSA cycle 1, we found that by reallocating existing human resources (0.25 FTE) to the accessioning station in the afternoon, our weekly average success rate of meeting the TAT target improved from 51% to 69%. In PDSA cycle 2, we deployed weekly TAT reports in a dashboard that gave each cytotechnologist timely performance feedback. This helped to further boost our success rate to an average of 78%. In PDSA cycle 3, we implemented a similar dashboard turnaround report for pathologists to address variability in TAT among pathologists. PDSA cycle 3 did not result in any further improvements in TAT.”

Based on reflection regarding the results, future actions can be summarized into the following 3 categories.13 First, when goals are achieved by the pilot implementation without causing unexpected problems, it may be appropriate to expand the program to a larger scale or spread it across your entire practice. At this stage, it also is important to consider plans to sustain the gains, or make even further improvements, through future PDSA cycles that contribute to “continuous improvement.” Second, if the original plan failed to achieve the desired results or caused new problems, analyze the discrepancies to try to understand why. It is at this stage that qualitative data, including discussion with the participants, can be useful. If the failures can be corrected, consider modifying the plan and retesting through a new PDSA cycle. It is very common to have a failed PDSA cycle in the improvement journey. Third, if your team believes a different approach would be more successful, consider abandoning the current intervention and starting a new cycle with a different plan. An example would be “We adopted and scaled up the idea of increasing capacity at the accessioning station.
We reallocated a surgical pathology laboratory assistant (1 FTE) to the cytology laboratory after we observed positive results from PDSA cycle 1. The intervention remained effective, with the average TAT success rate staying higher than 85%. Performance feedback for cytotechnologists also appeared to improve the TAT and now is standard practice in our laboratory. A similar feedback mechanism for cytopathologists helped to address performance variation; however, it did not appear to further improve the laboratory's overall gynecologic cytology signout TAT.” A summary of all PDSA cycle phases, key points, and corresponding details from the case study is provided in Table 1.

The PDSA cycle is a simple QI tool that requires few resources, can be deployed rapidly, and can lead to meaningful change. Thorough planning is critical to success. To be successful, one must be able to measure the desired quality and safety outcome, critical process metrics, or both. Qualitative data are important to collect because they can be invaluable in generating new hypotheses for process improvement. Data analysis is best performed using run charts and other validated visual QI tools. If your PDSA intervention is effective, laboratory policies should be formalized to reflect the QI changes and new workflows. To ensure success in QI endeavors, quality leadership should plan to present the data collected and PDSA project ideas to as many involved personnel as possible, regardless of rank or title. By its nature, QI is innovative, iterative, and collaborative, and therefore both successes and failures are expected as the process evolves. A successful QI project using PDSA can improve patient outcomes, laboratory efficiency, and morale and engagement among the laboratory team.

No specific funding was disclosed. The authors made no disclosures.
Yigu Chen, MPH, PMP, is a quality and process improvement specialist in the Department of Pathology at Beth Israel Deaconess Medical Center. He has a keen interest in propelling value-driven improvements in patient safety and health care quality. He oversees quality and operations data analytics, leads root cause analyses of adverse events, and facilitates improvement initiatives in the laboratory and across laboratory-related services.

Paul A. VanderLaan serves as the Director of Cytopathology and the Director of Thoracic Pathology at Beth Israel Deaconess Medical Center and is an Associate Professor of Pathology at Harvard Medical School, both in Boston, Massachusetts. He has published widely in the fields of cytopathology and pulmonary pathology, sits on the editorial boards of multiple medical journals, and consistently has an eye out for optimizing specimen processing and ongoing quality improvement measures in the cytopathology laboratory.

Yael K. Heher, MD, MPH, is an anatomic pathologist and the Director of Quality and Safety (Anatomic Pathology/Clinical Pathology) in the Department of Pathology at Beth Israel Deaconess Medical Center and Harvard Medical School in Boston, Massachusetts. Her main focus is the role of quality improvement and patient safety in pathology, including performance and metric assessment, adverse event management, transparency, effective change, and leadership.
