Abstract

Forecast verification is critical for continuous improvement in meteorological organizations. The Jive verification system was originally developed to assess the accuracy of public weather forecasts issued by the Australian Bureau of Meteorology. It began as a research project in 2015 and gradually evolved into the Bureau's operational verification system by 2022. The system includes daily verification dashboards that let forecasters visualize recent forecast performance, along with “Evidence Targeted Automation” dashboards for exploring the performance of competing forecast systems. Additionally, Jive includes a Jupyter Notebook server with the Jive Python library, which supports research experiments, case studies, and the development of new verification metrics and tools. This paper describes the Jive verification system and how it helped bring verification to the forefront at the Bureau of Meteorology, leading to more accurate, streamlined forecasts. Jive has provided evidence to support forecast automation decisions and has helped clarify the evolving role of meteorologists in the forecast process. It has given operational meteorologists tools for evaluating forecast processes, including identifying when and how manual interventions lead to superior predictions. Work on Jive has also produced new verification science, including novel decision-focused metrics and diagnostics for extreme conditions. Jive has additionally provided the Bureau with an enterprise-wide data analysis environment and prompted a clarification of forecast definitions. These collective impacts have resulted in more accurate forecasts, ultimately benefiting society and building trust with forecast users. Such positive outcomes highlight the importance of meteorological organizations investing in verification science and technology.