Abstract

Attitude jitter of satellite and unmanned aerial vehicle (UAV) platforms degrades imaging quality in high-resolution remote sensing. This letter proposes a deep learning architecture that automatically learns essential scene features from a single image to estimate the attitude jitter, which is then used to compensate the deformed image. The proposed method consists of a convolutional neural network and a jitter compensation model. The network analyzes the deformed image and outputs attitude jitter vectors in two directions, which are used to correct the image through interpolation and resampling. The PatternNet and small-UAV data sets are used to train the network and to validate its effectiveness and accuracy. Compensation results on distorted remote sensing images acquired by satellites and UAVs show that the distortion caused by attitude jitter is clearly reduced and that the geometric quality is effectively improved. In contrast to existing methods, which rely primarily on sensor data or parallax observation, our framework requires no auxiliary information.
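The compensation step described above, resampling the deformed image with estimated per-line jitter vectors, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the network has already produced two jitter displacement vectors (one per image row, for the row and column directions) and applies bilinear interpolation via `scipy.ndimage.map_coordinates`; the function name and the per-row jitter layout are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compensate_jitter(image, jitter_x, jitter_y):
    """Resample a jitter-distorted image (hypothetical sketch).

    image    : 2-D array whose rows correspond to scan lines
    jitter_x : per-row displacement along columns, shape (H,)
    jitter_y : per-row displacement along rows, shape (H,)
    """
    h, w = image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Each corrected pixel samples the distorted image at the
    # jitter-shifted location; order=1 gives bilinear interpolation.
    src_rows = rows + jitter_y[:, None]
    src_cols = cols + jitter_x[:, None]
    return map_coordinates(image, [src_rows, src_cols],
                           order=1, mode="nearest")
```

With zero jitter the function returns the input unchanged; with a constant row shift of +1 each output row samples the next input row, which is the expected resampling behavior.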
