Abstract

Detecting change-points and trends is a common task in the analysis of remote sensing data. Over the years, many different methods have been proposed for these purposes, including the (modified) Mann–Kendall and Cox–Stuart tests for detecting trends, and the Pettitt, Buishand range, Buishand U, standard normal homogeneity (Snh), Meanvar, structural change (Strucchange), breaks for additive season and trend (BFAST), and hierarchical divisive (E.divisive) methods for detecting change-points. In this paper, we describe a simulation study in which artificial abrupt changes are introduced at different time-periods of image time series to assess the performance of these methods. The power of the test, the type I error probability, and the mean absolute error (MAE) were used as performance criteria, although MAE was computed only for the change-point detection methods. The study reveals that if the magnitude of change (or trend slope) is high, and/or the change does not occur in the first or last time-periods, the methods generally have high power and low MAE. However, in the presence of temporal autocorrelation, MAE rises and the probability of introducing false positives increases noticeably. The modified versions of the Mann–Kendall method for autocorrelated data reduce its type I error probability, but this reduction comes at the cost of an important loss of power. Considering the trade-off between the power of the test and the type I error probability, we conclude that the original Mann–Kendall test is generally the preferable choice. Although Mann–Kendall cannot identify the time-period of an abrupt change, it is more reliable than the other methods when detecting the existence of such changes. Finally, we search for trends and change-points in day and night land surface temperature (LST) derived from monthly MODIS images of Navarre, Spain, from January 2001 to December 2018.
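
To make the performance criteria concrete, the sketch below (a minimal illustration, not the paper's actual experimental design) injects an artificial abrupt change into simulated pixel-level series and estimates, over Monte Carlo replicates, the power of a Mann–Kendall-style trend test (Kendall's tau against time) and of a hand-rolled Pettitt change-point test, plus the MAE of the estimated change-point location. The series length, change magnitude, change position, and number of replicates are assumed values chosen for the example.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)

def mann_kendall_p(series):
    """Mann-Kendall-style trend test: Kendall's tau of the series against time."""
    t = np.arange(len(series))
    return kendalltau(t, series).pvalue

def pettitt(series):
    """Pettitt change-point test; returns the estimated change position
    (first index of the new regime) and an approximate two-sided p-value."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    diffs = np.sign(x[None, :] - x[:, None])         # diffs[i, j] = sign(x[j] - x[i])
    u = np.array([diffs[: t + 1, t + 1:].sum() for t in range(n - 1)])
    k = np.abs(u).max()
    cp = int(np.abs(u).argmax()) + 1                 # first index after the shift
    p = 2.0 * np.exp(-6.0 * k**2 / (n**3 + n**2))    # Pettitt's asymptotic approximation
    return cp, min(1.0, p)

# Illustrative Monte Carlo set-up (assumed values, not the paper's design):
n, reps, alpha = 216, 500, 0.05     # e.g. 18 years of monthly observations per pixel
true_cp, delta = 120, 1.0           # abrupt upward shift of size delta starting at index 120

power_mk = power_pt = 0
abs_errors = []
for _ in range(reps):
    y = rng.normal(0.0, 1.0, n)
    y[true_cp:] += delta            # inject the artificial abrupt change
    power_mk += mann_kendall_p(y) < alpha
    cp_hat, p = pettitt(y)
    if p < alpha:
        power_pt += 1
        abs_errors.append(abs(cp_hat - true_cp))

print(f"Mann-Kendall power: {power_mk / reps:.2f}")
print(f"Pettitt power:      {power_pt / reps:.2f}")
print(f"Pettitt MAE:        {np.mean(abs_errors):.1f} time-periods")
```

Lowering the shift size delta, moving the change towards the first or last time-periods, or adding temporal autocorrelation to the simulated noise degrades these figures, which is the pattern the study quantifies.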

Highlights

  • Time-ordered satellite images are a sequence of remote sensing data acquired over a period of time for a region of interest

  • It is not straightforward to distinguish an inherent trend from a trend caused by an abrupt change

  • When a trend is caused by an abrupt change, trend detection methods are not able to point out the time-period of that change (see the sketch after this list)
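
As a minimal illustration of the last two points, the sketch below applies a Kendall's-tau-based trend test to two simulated series: one with an inherent gradual linear trend and one with no trend but a single abrupt shift half-way through. Both typically yield a significant "trend", yet the test gives no estimate of when the shift occurred. The slopes, shift size, noise level, and series length are assumptions made for the example, not values from the paper.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 216
t = np.arange(n)

# Series A: inherent (gradual) linear trend.
gradual = 0.005 * t + rng.normal(0.0, 0.5, n)
# Series B: no trend, but an abrupt upward shift half-way through.
abrupt = rng.normal(0.0, 0.5, n)
abrupt[n // 2:] += 0.6

for name, y in [("gradual trend", gradual), ("abrupt change", abrupt)]:
    tau, p = kendalltau(t, y)
    print(f"{name}: Kendall tau = {tau:.2f}, p-value = {p:.4f}")
# Both p-values are typically well below 0.05: the trend test flags both
# series, but it cannot say when the shift happened; locating it requires
# a change-point method such as Pettitt, Strucchange, or BFAST.
```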


Introduction

Time-ordered satellite images are a sequence of remote sensing data acquired over a period of time for a region of interest. In addition to trend tests, there are other methods, based on different techniques, for detecting distributional changes in mean and/or variance. In the presence of temporal autocorrelation, these methods show a high probability of introducing false trend/change-points [30,31,32,33,34,35]. This probability, called the type I error probability, varies among methods. Building upon the Mann–Kendall method, several techniques have been proposed for trend detection when dealing with temporally autocorrelated data [30,31,32,33,34]. The aim of these proposals is to reduce the type I error probability of Mann–Kendall in the presence of autocorrelation, but this reduction comes at the cost of an important loss of power.
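
As a rough illustration of this effect, the sketch below applies the Mann–Kendall test (via Kendall's tau against time) to simulated AR(1) series that contain no trend and no change-point, with and without a simple lag-1 pre-whitening step, used here as a stand-in for the modified Mann–Kendall variants mentioned above (pre-whitening is one common such adjustment). The AR(1) coefficient, series length, and number of replicates are arbitrary values chosen for the illustration.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)

def mk_pvalue(y):
    """Mann-Kendall-style trend test p-value: Kendall's tau against time."""
    return kendalltau(np.arange(len(y)), y).pvalue

def prewhiten(y):
    """Lag-1 pre-whitening: remove the estimated AR(1) component before testing."""
    r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
    return y[1:] - r1 * y[:-1]

n, reps, alpha, phi = 216, 1000, 0.05, 0.5   # assumed AR(1) coefficient phi
false_mk = false_pw = 0
for _ in range(reps):
    # AR(1) noise with no trend and no change-point.
    e = rng.normal(0.0, 1.0, n)
    y = np.empty(n)
    y[0] = e[0]
    for i in range(1, n):
        y[i] = phi * y[i - 1] + e[i]
    false_mk += mk_pvalue(y) < alpha
    false_pw += mk_pvalue(prewhiten(y)) < alpha

print(f"Type I error, original MK:     {false_mk / reps:.2f}  (well above the nominal 0.05)")
print(f"Type I error, pre-whitened MK: {false_pw / reps:.2f}  (closer to 0.05)")
```

Running the same comparison on series that do contain a trend shows the other side of the trade-off: the corrected test rejects less often, i.e. it loses power.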

