Increased ED waiting times, increased ED length of stay and their association with adverse events, including mortality, have refocused government policy on time targets. Time-based performance targets for emergency medicine were highlighted as a potentially useful strategy in the mid-1990s in Victoria with the Emergency Services Enhancement Program, when bonus payments were shown to improve patient access. In the UK, reform of the National Health Service (NHS) included implementation of the 4 h standard or target in 2004, with Western Australia (WA) and New Zealand following suit in 2009. The original intent was that 98% of patients would be seen, admitted, discharged or transferred from the ED within 4 h. This was soon revised to 95% in the UK and 85% of presentations in WA. In contrast, the New Zealand Shorter Stays in EDs (SSEDs) target was set at ‘95% in six hours’, following advice from local clinicians and nurses, with the belief that 6 h was ‘long enough for good clinical care but not unjustifiably long’.

Evaluation of the NHS targets suggests that they have been successful in reducing long waits in the ED. Significantly, UK emergency physicians describe the major benefit of the target as enforcement of a ‘whole of system’ change, rather than a focus on ED performance in isolation. However, there are conflicting reports on associated clinical outcomes, with concerns that a fixation on meeting the time standard might come at the expense of the quality of patient care. It is notable that the new British government responded to these fears by modifying the standard in 2011 to include a suite of eight clinical quality indicators (CQIs) designed to capture timeliness alongside the quality of clinical care and the patient experience. In Australia, the impact of time targets on improving the quality of care is unclear, with the Stokes report from WA suggesting the need for further study and ongoing monitoring. Initial research has suggested a possible reduction in mortality since the introduction of the 4 h rule in WA; however, methodological issues, including the lack of long-term trend data, limit any conclusions at this stage.

Introduction of mandatory reporting of CQIs potentially results in a more balanced approach to patient care, reducing the danger of achieving faster care alone. CQIs need to encompass measures of clinical outcomes, clinical effectiveness, safety and service experience, as well as timeliness. In doing so, they align with the Institute of Medicine’s quality of care framework of effectiveness, safety, timeliness, patient/family centredness, access and efficiency. CQIs should also support a culture of continuous improvement in the quality of care; however, this can only be achieved with ongoing monitoring and regular audit, alongside feedback and implementation strategies, which are not explicit in the introduction of these indicators.

In this issue of Emergency Medicine Australasia, Jones et al. report on a comprehensive process to develop a set of evidence-based, clinically and locally relevant quality of care indicators for New Zealand. Indicators were initially selected following review of the existing international literature, with feedback from the New Zealand emergency healthcare community, and were subsequently validated by a multidisciplinary reference group. The reference group was representative of the wider health system.