Abstract
Estimating depth from a single low-altitude aerial image captured by an Unmanned Aerial System (UAS) has become a recent research focus, with wide applications in 3D modeling, digital terrain models, and target detection. Whereas traditional 3D reconstruction requires multiple images, monocular depth estimation recovers depth from a single image, offering higher efficiency at lower cost. This study uses deep learning to estimate depth from a single UAS low-altitude remote sensing image. We propose a novel global and local mixed multi-scale feature enhancement network for monocular depth estimation in low-altitude remote sensing scenes, which exchanges information between feature maps of different scales through convolutional operations during the forward pass while maintaining a maximum-scale feature map. In addition, we propose a Global Scene Attention (GSA) module in the decoder of the depth network, which attends more effectively to object edges and distinguishes foreground from background in the UAS field of view. Finally, we design several loss functions tailored to the low-altitude remote sensing setting to constrain the network toward its optimal state. Extensive experiments on the public UAVid 2020 dataset show that our method outperforms state-of-the-art methods.
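The abstract does not include an implementation of the GSA module, so the sketch below is only an illustration of the general idea it describes: using global scene context to reweight decoder features so that foreground objects are separated from background. It is a minimal PyTorch sketch assuming a squeeze-and-excitation-style channel attention design; the class name, channel sizes, and reduction ratio are all hypothetical, and the paper's actual module may differ substantially.

```python
import torch
import torch.nn as nn


class GlobalSceneAttention(nn.Module):
    """Hypothetical sketch of a global-scene attention block.

    Pools the feature map to a single global descriptor, projects it
    into per-channel weights, and rescales the input so that globally
    salient channels (e.g. foreground objects) are emphasised over
    background. This is SE-style channel attention, used here only as
    a stand-in for the paper's (unpublished here) GSA design.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global scene context (N, C, 1, 1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.mlp(self.pool(x))  # (N, C, 1, 1)
        return x * weights                # reweight decoder features


if __name__ == "__main__":
    # Dummy decoder feature map: batch of 2, 64 channels, 60x80 spatial.
    feats = torch.randn(2, 64, 60, 80)
    gsa = GlobalSceneAttention(channels=64)
    print(gsa(feats).shape)  # torch.Size([2, 64, 60, 80])
```

In a depth decoder, a block like this would typically be applied to the feature map at each upsampling stage before the final depth prediction head, which matches the abstract's placement of GSA "in the decoder part of the depth network".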