Abstract

Forest ecosystems play a fundamental role in natural balances and climate mechanisms through their contribution to global carbon storage. Their sustainable management and conservation are crucial in the current context of global warming and biodiversity conservation. To tackle such challenges, Earth observation data has been identified as a valuable source of information capable of providing stakeholders with informative indicators to support decision making related to forest ecosystem management. While Earth observation data constitutes an unprecedented opportunity to monitor forest ecosystems, its effective exploitation still poses serious challenges, since multi-modal (i.e., multi-scale and multi-source) information needs to be combined in order to describe complex natural phenomena. To address this issue in the context of estimating structure and biophysical variables for forest characterization, we propose a new deep learning based fusion strategy that combines high density 3D point clouds acquired by airborne laser scanning (ALS) with high resolution optical imagery freely accessible via the Sentinel-2 mission. To manage and fully exploit the available multi-modal information, we implement a two-branch late fusion deep learning architecture that takes advantage of the specificity of each modality: a 2D-CNN branch is devoted to the analysis of Sentinel-2 time series data, while a Multi-Layer Perceptron branch is dedicated to the processing of LiDAR-derived information. The whole framework is learnt end-to-end in order to effectively exploit the complementarity between the two sources of information. The performance of our framework is evaluated on two forest variables of interest: total volume and basal area at stand level.
The obtained results underline that the availability of multi-modal remote sensing data does not, by itself, guarantee performance improvements; rather, the way in which the sources are combined is of paramount importance to leverage the complex interplay among the different inputs.
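To make the described architecture concrete, the following is a minimal PyTorch sketch of a two-branch late-fusion network of the kind the abstract outlines: a 2D-CNN branch for Sentinel-2 imagery, an MLP branch for LiDAR-derived stand metrics, and a fused regression head for the two targets (total volume and basal area). All layer sizes, the number of input channels, the LiDAR feature dimension, and the choice to stack the time series as image channels are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class TwoBranchLateFusion(nn.Module):
    """Hedged sketch of a two-branch late-fusion regressor."""

    def __init__(self, s2_channels=10, lidar_features=20):
        super().__init__()
        # Branch 1: 2D-CNN over Sentinel-2 patches
        # (time steps stacked along the channel axis -- an assumption).
        self.cnn = nn.Sequential(
            nn.Conv2d(s2_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions
            nn.Flatten(),             # -> (batch, 64)
        )
        # Branch 2: MLP over LiDAR-derived (ALS) features.
        self.mlp = nn.Sequential(
            nn.Linear(lidar_features, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
        )
        # Late fusion: concatenate the branch embeddings, then regress
        # the two variables of interest (total volume, basal area).
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, s2, lidar):
        fused = torch.cat([self.cnn(s2), self.mlp(lidar)], dim=1)
        return self.head(fused)

# Shape check on random inputs: a batch of 4 stands, each with a
# 10-channel 16x16 Sentinel-2 patch and 20 LiDAR-derived features.
model = TwoBranchLateFusion()
out = model(torch.randn(4, 10, 16, 16), torch.randn(4, 20))
print(out.shape)  # torch.Size([4, 2])
```

Because the fusion happens after each modality has been encoded by its own branch, the CNN and MLP can be tailored to the structure of their respective inputs, while end-to-end training of the shared head lets the gradients exploit the complementarity between the two sources.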
