Abstract

Various systems have been developed to process agricultural land data for better management of crop production. One such system is the Cropland Data Layer (CDL), produced by the National Agricultural Statistics Service of the United States Department of Agriculture (USDA). The CDL has been widely used for training deep learning (DL) segmentation models. However, it contains various errors, such as salt-and-pepper noise, and must be refined before being used for DL training. In this study, we used two approaches to refine the CDL for DL segmentation of major crops from a time series of Sentinel-2 monthly composite images. First, different confidence intervals of the CDL confidence layer were used to refine the labels. Second, several image filters were employed to improve data quality. The refined CDLs were then used as ground truth in DL segmentation training and evaluation. The results demonstrate that the CDL refined with the +45% and +55% confidence intervals produced the best results, improving the accuracy of DL segmentation by approximately 1% compared to non-refined data. Additionally, filtering the CDL with the majority and expand–shrink filters yielded the best performance, enhancing the evaluation metrics by about 1.5%. These findings suggest that pre-filtering the CDL and selecting an effective confidence interval can significantly improve DL segmentation performance, contributing to more accurate and reliable agricultural monitoring.
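
For illustration only, the sketch below shows one plausible way to implement the two refinement steps described above: masking low-confidence pixels using the CDL confidence layer and smoothing salt-and-pepper noise with a majority filter. This is not the authors' code; the threshold value, window size, and toy data are assumptions.

```python
"""Hypothetical sketch of CDL refinement:
(a) mask pixels below a confidence threshold, (b) apply a 3x3 majority filter."""
import numpy as np
from scipy.ndimage import generic_filter

def mask_by_confidence(cdl, confidence, threshold=55):
    """Set pixels below the confidence threshold to 0 (treated as unlabeled)."""
    refined = cdl.copy()
    refined[confidence < threshold] = 0
    return refined

def majority_filter(cdl, size=3):
    """Replace each pixel with the most frequent class label in its window."""
    def _mode(window):
        return np.bincount(window.astype(np.int64)).argmax()
    return generic_filter(cdl, _mode, size=size, mode="nearest")

if __name__ == "__main__":
    # Toy label patch with synthetic salt-and-pepper noise (class codes are arbitrary).
    rng = np.random.default_rng(0)
    cdl = np.full((64, 64), 1, dtype=np.uint8)        # e.g., corn
    cdl[:, 32:] = 5                                   # e.g., soybeans
    speckle = rng.random(cdl.shape) < 0.05
    cdl[speckle] = rng.integers(1, 10, speckle.sum())
    confidence = rng.integers(30, 100, cdl.shape)     # percent confidence

    refined = majority_filter(mask_by_confidence(cdl, confidence, threshold=55))
    print("pixels changed:", int((refined != cdl).sum()))
```

An expand–shrink step, as referenced in the abstract, could be approximated with per-class morphological closing (e.g., scipy.ndimage.binary_closing applied to each class mask), though the exact filter parameters used in the study are not given here.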
