Abstract

Wall-to-wall Earth Observation (EO) data are required, as recommended by the Intergovernmental Panel on Climate Change, private sector organizations and major development partners, to implement forest monitoring commitments and to monitor commodity-led deforestation. However, the use of optical EO data in the High Forest Zone of Ghana is limited by persistent cloud cover and by the spectral difficulty of separating agroforestry cocoa (AFC) from open canopy forest (OCF). The aim of this study was to investigate the synergistic use of Sentinel-1 (S1) and Sentinel-2 (S2) EO data to produce a land use/land cover map in which AFC and OCF are mapped as separate classes. It was hypothesized that a hybrid method combining spectral, radar and image-object features would accurately separate the different cocoa systems from forest and other land use classes. The research was conducted in the Juaboso-Bia REDD+ Hotspot Intervention Area, in the cocoa-forest mosaic landscape within the High Forest Zone of Ghana. The S1 and S2 datasets were freely acquired for the period January to March 2018. The S1 data were pre-processed to backscatter intensity (VV and VH) bands. The S2 data were corrected for atmospheric effects, and cloud pixels were masked and filled using a temporal gap-filling method. Six vegetation indices (VIs) were extracted, and the Multiresolution Segmentation algorithm was used to derive image objects (IOs). The S2 bands, the six VIs, the S1 VV and VH data, and the IOs were stacked into three multi-layer image datasets denoted D1 to D3 (D1 = S2 + VIs; D2 = S2 + VIs + S1; D3 = S2 + VIs + S1 + IOs). The three datasets were classified using Random Forest and 1228 training points. Overall accuracy (OA) and kappa (k) were calculated for each classification using 615 independent validation points. McNemar's test (χ²) was used to assess whether the differences between the D1, D2 and D3 classifications were statistically significant. The results show that D3 significantly improved the overall classification (OA = 89.76%, k = 0.877) compared with D1 (OA = 79.02%, k = 0.748; χ² = 5.56, p = 0.018) and D2 (OA = 80.49%, k = 0.765; χ² = 5.50, p = 0.019). Combining spectral pixels with image objects increased overall classification accuracy and, in particular, the accuracy of separating AFC from OCF. This research is significant because it provides improved decision support for government-led monitoring and for the private sector's commitment to halt cocoa-driven deforestation. Furthermore, and most importantly, the map separates agroforestry cocoa from monoculture cocoa, which substantially strengthens monitoring of landscape-level improvements associated with the promotion and adoption of agroforestry in cocoa landscapes as a climate-smart practice, as well as monitoring of off-reserve forest restoration activities.
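
The accuracy-assessment workflow summarized above (Random Forest classification of stacked feature sets with 1228 training and 615 validation points, followed by comparison of the resulting maps via overall accuracy, kappa and McNemar's test) can be illustrated with a short Python sketch. This is not the authors' implementation: the libraries (scikit-learn, statsmodels), the assumed number of classes, the feature counts and the randomly generated placeholder values are assumptions chosen only to make the example self-contained and runnable; in practice the feature matrices would be sampled from the pre-processed S2, VI, S1 and IO raster stacks at the reference points.

# Minimal sketch (not the authors' code) of classifying two feature stacks with
# Random Forest and comparing them with OA, Cohen's kappa and McNemar's test.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(42)
n_train, n_val, n_classes = 1228, 615, 6      # class count assumed for illustration
y_train = rng.integers(0, n_classes, n_train)  # placeholder reference labels
y_val = rng.integers(0, n_classes, n_val)

def classify(n_features):
    """Train RF on a placeholder feature stack and predict the validation points."""
    X_train = rng.normal(size=(n_train, n_features))  # stands in for sampled raster values
    X_val = rng.normal(size=(n_val, n_features))
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X_train, y_train)
    return rf.predict(X_val)

pred_d1 = classify(16)  # e.g. D1 = S2 bands + vegetation indices (feature count assumed)
pred_d3 = classify(19)  # e.g. D3 = S2 + VIs + S1 VV/VH + image-object layer

for name, pred in [("D1", pred_d1), ("D3", pred_d3)]:
    print(name, "OA =", accuracy_score(y_val, pred),
          "kappa =", cohen_kappa_score(y_val, pred))

# McNemar's test on the 2x2 agreement/disagreement table against the reference labels
d1_ok, d3_ok = pred_d1 == y_val, pred_d3 == y_val
table = [[np.sum(d1_ok & d3_ok),  np.sum(d1_ok & ~d3_ok)],
         [np.sum(~d1_ok & d3_ok), np.sum(~d1_ok & ~d3_ok)]]
res = mcnemar(table, exact=False, correction=True)  # chi-squared form of the test
print("McNemar chi2 =", res.statistic, "p =", res.pvalue)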
