Abstract

Accurate and consistent broad-scale mapping of fire severity is an important resource for fire management as well as fire-related ecological and climate change research. Remote sensing and machine learning approaches present an opportunity to enhance the accuracy and efficiency of current practices. Quantitative biophysical models of photosynthetic, non-photosynthetic and bare cover fractions have not been widely applied to fire severity studies, but may provide greater consistency than reflectance-based indices when comparing different fires across the landscape. We systematically tested and compared reflectance and fractional cover candidate severity indices derived from Sentinel-2 satellite imagery using a random forest (RF) machine learning framework. Assessment of predictive power (cross-validation) was undertaken to quantify the accuracy of mapping the severity of new fires. The effect of environmental variables on the accuracy of the RF-predicted severity classification was examined to assess the stability of the mapping across the landscape. The results indicate that fire severity can be mapped with very high accuracy using Sentinel-2 imagery and RF supervised classification. The mean accuracy was >95% for the unburnt and extreme severity classes (complete crown consumption), >85% for the high severity class (full crown scorch), >80% for the low severity class (burnt understorey, unburnt canopy) and >70% for the moderate severity class (partial canopy scorch). Higher canopy cover and higher topographic complexity were associated with a higher rate of under-prediction, due to the limitations of optical sensors in viewing the burnt understorey of low severity classes under these conditions. Further research is aimed at improving classification accuracy for the low and moderate severity classes and at applying the RF algorithm to hazard reduction fires.
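The workflow described above, RF supervised classification of per-pixel severity indices with cross-validated accuracy assessment, can be sketched as follows. This is a minimal illustration using scikit-learn and synthetic data; the feature names, class labels and all parameter values are assumptions for demonstration, not the authors' actual configuration, and real inputs would be Sentinel-2 reflectance or fractional cover indices sampled at training pixels.

```python
# Hypothetical sketch of RF fire-severity classification with
# cross-validation (synthetic stand-in for Sentinel-2-derived indices).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pixels = 500

# Candidate severity predictors per pixel, e.g. reflectance-based
# indices or photosynthetic / non-photosynthetic / bare cover fractions.
X = rng.normal(size=(n_pixels, 6))

# Five severity classes: 0 unburnt, 1 low, 2 moderate, 3 high, 4 extreme
# (labels here are random; real labels would come from field/API mapping).
y = rng.integers(0, 5, size=n_pixels)

rf = RandomForestClassifier(n_estimators=200, random_state=0)

# Cross-validation quantifies predictive power on held-out samples,
# approximating the accuracy of mapping severity for unseen fires.
scores = cross_val_score(rf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In practice, per-class accuracy (e.g. a confusion matrix rather than a single mean score) would be needed to reproduce the class-wise figures reported above, and cross-validation folds would ideally be split by fire event rather than by pixel to avoid spatial autocorrelation inflating the estimate.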
