Mapping tree crowns in arid and semi-arid areas, which cover around one-third of the Earth’s land surface, is key to the sustainable management of trees. Recent advances in deep learning have shown promising results for tree crown segmentation, but a large amount of manually labeled data is still required. Here, we propose a novel method to delineate tree crowns from high-resolution satellite imagery using deep learning trained on labels generated automatically by 3D radiative transfer modeling, with the aim of substantially reducing human annotation. The method consists of three steps: (1) simulating images with a 3D radiative transfer model, (2) image style transfer based on a generative adversarial network (GAN), and (3) tree crown segmentation with a U-Net model. The delineation performance of the proposed method was evaluated on a manually annotated dataset of more than 40,000 tree crowns. Our approach, which relies solely on synthetic images, achieves high segmentation accuracy, with an F1 score exceeding 0.77 and an Intersection over Union (IoU) above 0.64. In particular, it extracts crown areas (r² greater than 0.87) and crown densities (r² greater than 0.72) with accuracy comparable to that of a model trained solely on human-annotated data. This study demonstrates that integrating a 3D radiative transfer model with GANs to automatically generate training labels can match the performance of human labeling and substantially reduce the time needed for manual annotation in remote sensing segmentation applications.
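To make step (3) concrete, the following is a minimal illustrative sketch, not the authors' code: a toy U-Net-style segmentation model trained in PyTorch. The `TinyUNet` class, the random placeholder tensors, and all hyperparameters are hypothetical; in the paper's pipeline the inputs would be the GAN-styled simulated images and the crown masks produced automatically by the 3D radiative transfer model.

```python
# Minimal sketch of U-Net training on automatically generated labels.
# Assumes PyTorch; data and model size are toy stand-ins.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Reduced U-Net with one down/up level, enough to show the pattern."""
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.pool = nn.MaxPool2d(2)
        self.mid = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)   # 64 = 32 skip channels + 32 upsampled
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)                 # encoder features kept for skip link
        m = self.mid(self.pool(e))
        u = self.up(m)                  # upsample back to input resolution
        return self.head(self.dec(torch.cat([e, u], dim=1)))

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()        # binary crown / background mask

# Placeholders: in the pipeline these would be style-transferred synthetic
# images and the crown masks generated by the radiative transfer model.
images = torch.rand(4, 3, 128, 128)
masks = (torch.rand(4, 1, 128, 128) > 0.5).float()

for step in range(10):                  # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    opt.step()
```

Because the labels come directly from the simulator's geometry, the mask tensor is available at no annotation cost; only the GAN style-transfer stage (omitted above) is needed to close the domain gap between simulated and real satellite imagery.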