Abstract

Authentic remote-sensing images suffer non-uniform, complex distortions during acquisition, transmission, and storage; clouds, illumination, and exposure also degrade local quality. This paper constructs a usability-based subjective remote-sensing image dataset and defines usability for images with non-uniform distortion, where image usability is determined by the weighted quality of the image's blocks. Handcrafted features are difficult to extract from remote-sensing images with complex mixtures of distortions. Recently, convolutional neural networks (CNNs) have been introduced into blind quality assessment for images with uniform distortion, combining feature learning and regression in a single pipeline. In this paper, we first describe and systematically analyze the usability of remote-sensing images in detail. Then, we propose a remote-sensing image usability assessment (RSIUA) method based on a residual network that combines edge and texture maps. The usability score of a remote-sensing image is obtained as a weighted average of the quality scores of all image blocks, where the weight of each block is determined by its own quality score. We compared the proposed method with three traditional image quality assessment methods, one CNN-based method for images with simulated distortion, and one scale-invariant feature transform-based RSIUA method. The linear correlation coefficient, Spearman's rank-order correlation coefficient, and root-mean-squared error of the experiments demonstrate that our method outperforms all five competitors. The experiments also reveal that the edge and texture maps improve performance.
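The quality-weighted pooling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each block's weight is proportional to its own quality score, since the abstract does not state the exact weighting function.

```python
def usability_score(block_scores):
    """Weighted average of per-block quality scores.

    Assumption (hypothetical): each block's weight equals its own
    quality score, so higher-quality blocks contribute more to the
    overall usability score of the image.
    """
    if not block_scores:
        raise ValueError("need at least one block score")
    total_weight = sum(block_scores)
    if total_weight == 0:
        return 0.0  # all blocks judged unusable
    # Weighted average with w_i = q_i reduces to sum(q_i^2) / sum(q_i).
    return sum(q * q for q in block_scores) / total_weight
```

For example, blocks scored [0.2, 0.8] pool to 0.68 rather than the plain mean of 0.5, reflecting the greater influence of the high-quality block.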
