Abstract

Forest stands are a basic unit of analysis for forest inventory and mapping. Stands are defined as large forested areas of homogeneous tree species composition and age. Their accurate delineation is usually performed by human operators through visual analysis of very high resolution (VHR) infrared and visible images. This task is tedious and highly time-consuming, and must be automated for scalability and efficient updating. Here, we investigate the most appropriate fusion of two remote sensing modalities: lidar and multispectral images. The multispectral images provide information about tree species, while the 3D lidar point clouds provide geometric information. Fusion is performed at three different levels within a semantic segmentation workflow: over-segmentation, classification, and regularization. Results show that over-segmentation can be performed on either the lidar or the optical data without loss or gain in performance, whereas fusion is mandatory for efficient semantic segmentation. Finally, the fusion strategy dictates the composition and nature of the resulting forest stands, demonstrating the high versatility of our approach.
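
To make the three-level workflow concrete, the following Python sketch shows one plausible way to chain over-segmentation, fused classification, and regularization. Everything here is an illustrative assumption rather than the authors' implementation: the data are synthetic, SLIC superpixels stand in for the over-segmentation, a random forest for the classifier, and a local majority filter for the regularization.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier
from scipy.ndimage import generic_filter

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two co-registered modalities.
optical = rng.random((128, 128, 4))    # VHR multispectral image (IR + RGB bands)
lidar_chm = rng.random((128, 128))     # lidar-derived canopy height model

# 1) Over-segmentation: superpixels computed here on the optical image;
#    the abstract reports no performance loss or gain when using lidar instead.
segments = slic(optical, n_segments=300, compactness=10, channel_axis=-1)
seg_ids = np.unique(segments)

# 2) Classification: per-segment fusion of spectral and geometric features.
def segment_features(seg_id):
    mask = segments == seg_id
    spectral = optical[mask].mean(axis=0)             # mean reflectance per band
    geometric = [lidar_chm[mask].mean(),              # mean canopy height
                 lidar_chm[mask].std()]               # height variability
    return np.concatenate([spectral, geometric])

X = np.stack([segment_features(s) for s in seg_ids])
y = rng.integers(0, 3, size=len(seg_ids))             # placeholder species labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
label_map = clf.predict(X)[np.searchsorted(seg_ids, segments)]

# 3) Regularization: smooth segment labels into homogeneous stands with a
#    simple majority filter (a crude stand-in for the paper's regularization).
def majority(window):
    return np.bincount(window.astype(int)).argmax()

stands = generic_filter(label_map, majority, size=7)
```

In a real pipeline, the training labels would come from reference field plots rather than being predicted on the training segments themselves, and the regularization stage might operate on the segment adjacency graph instead of a fixed pixel window.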
