Abstract

Near real-time (NRT) monitoring of land disturbances is critical for delivering emergency aid, mitigating negative social and ecological impacts, and distributing resources for disaster recovery. Many past NRT techniques compared the overall change magnitude of a spectral anomaly against a predefined threshold (the unsupervised approach). However, because these techniques do not fully consider spectral change direction, change date, and pre-disturbance conditions, they often suffered from low detection sensitivity and high commission errors, especially when only a few satellite observations were available at the early disturbance stage, ultimately lengthening the lag before a reliable disturbance map could be produced. In this study, we developed a novel supervised machine learning approach guided by historical disturbance datasets to accelerate land disturbance monitoring. The new approach consists of two phases. In the first phase, a retrospective analysis was applied to historical Harmonized Landsat Sentinel-2 (HLS) datasets from 2015 to 2021, combined with several open disturbance products; a separate disturbance model was constructed for each count of consecutive anomaly observations, with the aim of enhancing specificity for delineating early-stage disturbance regions. In the second phase, these stage-based models were applied in an NRT scenario to predict disturbance probabilities incrementally, on a weekly basis, from 2022 HLS images. To demonstrate the capability of this new approach, we developed an operational NRT system incorporating both the unsupervised and supervised approaches. Latency and accuracy were evaluated against 3,000 samples randomly selected from the five most influential disturbance events in the United States in 2022, with labels and disturbance dates interpreted from daily PlanetScope images. The evaluation showed that the supervised approach required 15 days (from the start of the disturbance event) to reach the plateau of its F1 curve (where most disturbance pixels are detected with high confidence), seven days earlier than the unsupervised approach and with roughly a 0.2 improvement in F1 score (0.733 vs. 0.546). Further analysis showed that the improvement was mainly due to a substantial decrease in commission errors (17.7% vs. 44.4%). The latency component analysis showed that the supervised approach took an average of only 4.1 days to yield the first disturbance alert at its fastest (daily) updating speed, owing to its decreased sensitivity lag. These findings highlight the importance of using past knowledge and machine learning to reduce detection delays in NRT monitoring tasks.
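The stage-based design described above can be read as: (i) fit one classifier per count of consecutive anomaly observations using historical anomaly sequences and reference labels, and (ii) at prediction time, score each pixel with the model matching its current consecutive-anomaly count. The following is a minimal sketch of that logic, not the authors' implementation: the feature summary, the random-forest classifier, the six-stage cap, and all names (`stage_features`, `train_stage_models`, `nrt_probability`) are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_STAGES = 6  # hypothetical cap on consecutive-anomaly counts given their own model

def stage_features(anomalies):
    """Summarize the first k consecutive spectral anomalies of one pixel.

    `anomalies` is a (k, n_bands) array of observed-minus-baseline residuals.
    The mean magnitude stands in for change magnitude and the signed per-band
    mean for change direction; a full system would presumably append
    change-date and pre-disturbance condition features here.
    """
    magnitude = np.linalg.norm(anomalies, axis=1).mean()
    direction = anomalies.mean(axis=0)
    return np.concatenate([[magnitude], direction])

def train_stage_models(sequences, labels):
    """Phase 1: retrospective training on historical anomaly sequences.

    `sequences` is a list of (n_obs, n_bands) anomaly arrays derived from the
    historical record; `labels` are 0/1 disturbance labels from reference
    products. One classifier is fitted per consecutive-anomaly count k.
    """
    models = {}
    for k in range(1, N_STAGES + 1):
        X = np.array([stage_features(s[:k]) for s in sequences if len(s) >= k])
        y = np.array([l for s, l in zip(sequences, labels) if len(s) >= k])
        models[k] = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    return models

def nrt_probability(models, anomalies_so_far):
    """Phase 2: score a pixel with the model matching its current anomaly count."""
    k = min(len(anomalies_so_far), N_STAGES)
    x = stage_features(np.asarray(anomalies_so_far)[:k]).reshape(1, -1)
    return models[k].predict_proba(x)[0, 1]

# Tiny synthetic demonstration: disturbed pixels carry a positive mean residual.
rng = np.random.default_rng(0)
sequences = [rng.normal(loc=0.5 * (i % 2), size=(6, 4)) for i in range(200)]
labels = [i % 2 for i in range(200)]
models = train_stage_models(sequences, labels)
print(nrt_probability(models, sequences[1][:2]))  # probability after 2 anomalies
```

In a full system, the anomaly arrays would come from residuals against a per-pixel baseline fitted to the historical HLS record, and each new image would increment the pixel's consecutive-anomaly count before the matching stage model is consulted.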
