Abstract

Reconstructed weather datasets, such as reanalyses based on model output with data assimilation, often show systematic biases in magnitude when compared with observations. Postprocessing approaches can help adjust the distribution so that the reconstructed data resemble the observed data as closely as possible. In this study, we compare various statistical bias-correction approaches based on quantile–quantile matching to correct data from the Twentieth Century Reanalysis, version 2c (20CRv2c), against observation-based data. The methods compared span a suite of different approaches: a linear model, a median-based approach, a nonparametric linear method, a spline-based method, and approaches based on the lognormal and Weibull distributions. These methods were applied to daily data in the Australian region for rainfall, maximum temperature, relative humidity, and wind speed: the variables required to compute the forest fire danger index (FFDI), which is widely used in Australia to examine dangerous fire weather conditions. We compared the relative errors and performance of each method across various locations in Australia and applied the approach with the lowest mean absolute error across multiple variables to produce a reliable long-term bias-corrected FFDI dataset for Australia. The spline-based correction was found to have some benefits relative to the other methods in better representing the mean FFDI values and the extremes of the observed records for many of the cases examined here. It is intended that this statistical bias-correction approach, applied to long-term reanalysis data, will enable new insights into climatological variations in hazardous phenomena, including dangerous wildfires in Australia, extending over the past century.
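To make the core idea concrete, the sketch below illustrates empirical quantile–quantile matching, the principle underlying the bias-correction methods compared in the study. This is a minimal, generic implementation for illustration only, not the specific linear, spline-based, or distribution-based variants evaluated in the paper; the function and variable names are hypothetical.

```python
import numpy as np

def quantile_map(model, obs, new_model):
    """Empirical quantile-quantile matching (illustrative sketch only).

    Each value in `new_model` is mapped to its cumulative probability
    under the model's empirical distribution, then inverted through the
    observed quantiles, so the corrected data follow the observed
    distribution.
    """
    probs = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model, probs)  # model quantiles (calibration period)
    obs_q = np.quantile(obs, probs)      # observed quantiles (same period)
    # model value -> cumulative probability -> observed value
    p = np.interp(new_model, model_q, probs)
    return np.interp(p, probs, obs_q)

# Toy example: "model" data systematically biased high relative to "observations"
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, size=5000)   # skewed, rainfall-like observed data
model = obs * 1.3 + 1.0                # reanalysis-like data with magnitude bias
corrected = quantile_map(model, obs, model)
```

After correction, the mean and upper quantiles of `corrected` closely match those of `obs`, which is the property the study exploits to preserve both mean FFDI values and extremes.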