Abstract

The high-precision navigation and positioning capability of UAVs (Unmanned Aerial Vehicles) is a key indicator of their degree of automation. Visual navigation based on image matching has become an important research direction for realizing autonomous UAV navigation because of its low cost, strong anti-jamming capability, and good localization accuracy. However, the visual quality of images captured by UAVs can be seriously degraded by factors such as weak illumination or limited sensor performance. Mitigating the degradations present in low-light images improves visual quality and enhances the performance of UAV visual navigation. In this paper, we propose a novel fully convolutional network based on the Retinex theory to address the degradations of low-light images captured by UAVs, which effectively improves both the visual quality of the images and visual matching performance. In addition, a visual navigation system is designed based on the proposed network. Extensive experiments demonstrate that our method outperforms existing methods by a large margin, both quantitatively and qualitatively, and effectively improves the performance of image matching algorithms. The visual navigation system successfully realizes self-localization of the UAV under different illumination conditions. Moreover, we show that our method is also effective in other practical tasks (e.g., autonomous driving).
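For context, the Retinex theory referenced above models an observed image as the pixel-wise product of a reflectance component and an illumination component; the decomposition below is the standard formulation of that model, not a restatement of the authors' specific network design:

S(x, y) = R(x, y) \cdot L(x, y)

where S is the observed low-light image, R captures the intrinsic scene reflectance, and L is the illumination map. Enhancement methods in this family typically estimate R and L from the input and then adjust L to produce a brighter, better-exposed result.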
