Abstract

Many onboard vision tasks for spacecraft navigation require high-quality remote-sensing images with clearly decipherable features, but design constraints and operational and environmental conditions limit achievable image quality. Enhancing images through postprocessing is a cost-efficient solution. Current deep learning methods that enhance low-resolution images through super-resolution do not quantify the uncertainty of network predictions and are trained at a single scale, which hinders practical integration into image-acquisition pipelines. This work proposes multiscale super-resolution with a deep normalizing flow network that produces uncertainty-quantified, Monte Carlo estimates, making image enhancement for spacecraft vision tasks more robust and predictable. The proposed network architecture outperforms state-of-the-art super-resolution models on in-orbit lunar imagery data, and simulations demonstrate its viability in task-based evaluations for landmark identification.
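To illustrate the idea of Monte Carlo uncertainty estimates from a flow-based super-resolution model, the following minimal sketch draws multiple super-resolved samples from different latent draws and computes a per-pixel mean and standard deviation. The `FlowSRModel` class, the `mc_super_resolve` helper, and the `n_samples` and `temperature` parameters are hypothetical placeholders for illustration only; they are not the architecture or code proposed in the paper.

```python
# Illustrative sketch only: Monte Carlo uncertainty estimation with a
# conditional flow-style super-resolution model. FlowSRModel is a
# hypothetical stand-in, NOT the paper's architecture.
import torch
import torch.nn.functional as F


class FlowSRModel(torch.nn.Module):
    """Hypothetical conditional model: (low-res image, latent z) -> high-res sample."""

    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.refine = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def sample(self, lr, z):
        # Bicubic upsample plus a latent-dependent refinement; a real normalizing
        # flow would instead invert a learned bijection conditioned on the input.
        hr = F.interpolate(lr, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return self.refine(hr + 0.1 * z)


def mc_super_resolve(model, lr, n_samples=32, temperature=0.8):
    """Draw n_samples super-resolved images; return per-pixel mean and std maps."""
    samples = []
    with torch.no_grad():
        for _ in range(n_samples):
            z = temperature * torch.randn(
                lr.shape[0], 3, lr.shape[2] * model.scale, lr.shape[3] * model.scale
            )
            samples.append(model.sample(lr, z))
    stack = torch.stack(samples)                # (n_samples, B, C, H, W)
    return stack.mean(dim=0), stack.std(dim=0)  # point estimate, uncertainty map


if __name__ == "__main__":
    lr_patch = torch.rand(1, 3, 32, 32)  # placeholder low-res image patch
    model = FlowSRModel(scale=4)
    sr_mean, sr_std = mc_super_resolve(model, lr_patch)
    print(sr_mean.shape, sr_std.shape)   # both (1, 3, 128, 128)
```

The standard-deviation map produced this way is one common proxy for predictive uncertainty; downstream tasks such as landmark identification could, under this assumption, weight or reject regions with high uncertainty.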
