Abstract

In unmanned aerial vehicle (UAV) based urban observation and monitoring, the performance of computer vision algorithms is inevitably limited by degradation caused by low illumination and light pollution; image enhancement is therefore an important prerequisite for subsequent image processing algorithms. We propose LighterGAN, a deep learning model based on generative adversarial networks for UAV low-illumination image enhancement. The design of LighterGAN follows the CycleGAN model, with two improvements to the original structure: an attention mechanism and a semantic consistency loss. An unpaired dataset captured by urban UAV aerial photography was used to train this unsupervised learning model. To explore the advantages of these improvements, both the illumination enhancement performance and the improved generalization ability of LighterGAN were demonstrated in comparative experiments combining subjective and objective evaluations. In experiments against five cutting-edge image enhancement algorithms on the test set, LighterGAN achieved the best results in both visual perception and PIQE (perception-based image quality evaluator, a MATLAB built-in function; the lower the score, the higher the image quality) score of the enhanced images, scoring 4.91 and 11.75 respectively, better than the state-of-the-art EnlightenGAN. In the enhancement of the low-illumination sub-dataset Y (containing 2000 images), LighterGAN also achieved the lowest PIQE score of 12.37, 2.85 points lower than second place. Moreover, the improvement in generalization ability over CycleGAN was also demonstrated.
On the test set generated images, LighterGAN scored 6.66 percentage points higher than CycleGAN in subjective authenticity assessment and 3.84 points lower in PIQE score; on the images generated from the whole dataset, the PIQE score of LighterGAN was 11.67, 4.86 points lower than that of CycleGAN.
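The abstract describes LighterGAN as CycleGAN with a semantic consistency loss added to the usual adversarial and cycle-consistency terms. The sketch below shows, in numpy, how a cycle-consistency loss and a feature-space semantic consistency loss might be combined; the generators `G`/`F`, the `features` embedding, and the 0.5 weight are illustrative placeholders, not the paper's networks or hyperparameters.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

# Placeholder "generators": G maps low-light -> enhanced, F maps back.
# In the real model these are CNNs; a simple offset stands in here.
G = lambda x: x + 0.1
F = lambda y: y - 0.1

def features(x):
    """Hypothetical low-dimensional embedding standing in for a
    feature extractor used by a semantic (perceptual-style) loss."""
    return x.reshape(-1)[:16]

x = np.random.rand(8, 8)  # a low-illumination image patch

cycle_loss = l1(F(G(x)), x)                      # ||F(G(x)) - x||_1
semantic_loss = l1(features(G(x)), features(x))  # feature-space consistency

total = cycle_loss + 0.5 * semantic_loss         # 0.5 is an illustrative weight
```

With these toy generators the cycle maps back exactly, so the cycle term is zero while the semantic term penalizes the constant brightness shift in feature space; in the trained model both terms are driven down jointly.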

Highlights

  • With the development of unmanned aerial vehicle (UAV) technologies, more important and complex tasks are performed by low- or ultra-low-altitude UAVs embedding powerful functions, especially in the field of urban remote sensing [1]

  • In the comparative experiments on the test set images, LighterGAN achieved the best results in both subjective visual perception and objective no-reference image quality assessment (NR-IQA); these results indicate that the perception-based image quality evaluator (PIQE) can objectively evaluate the performance of illumination enhancement algorithms

  • Because LighterGAN uses unsupervised learning and an unpaired dataset, the test images showed no signs of model collapse or overfitting; there is no unique correct result for the enhancements in the low-illumination sub-dataset Y, since the outcome for each low-illumination image in the sub-dataset is unknown before enhancement
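The other improvement named in the abstract is an attention mechanism added to the generator. The paper's exact module is not specified in this excerpt; the numpy sketch below shows a generic squeeze-and-excitation style channel attention as an illustration of the idea (the weights `w1`/`w2` and the reduction ratio are hypothetical).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention.
    feat: (C, H, W) feature map; w1, w2: learned projection matrices."""
    squeeze = feat.mean(axis=(1, 2))                     # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0)) # bottleneck MLP, gates in (0, 1)
    return feat * excite[:, None, None]                  # rescale each channel

C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
feat = rng.random((C, H, W))
w1 = rng.random((2, C))   # reduction to C // 2 channels
w2 = rng.random((C, 2))
out = channel_attention(feat, w1, w2)
```

The sigmoid gate re-weights channels by their global statistics, letting the network emphasize feature channels that matter for recovering detail in dark regions while suppressing the rest.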


Summary

Enhancement Method for Urban UAV

Matrix Mathematical Imaging Center, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China. Lingnan Guangdong Laboratory of Modern Agriculture, South China Agricultural University, Guangzhou 510642, China

Introduction
Related Works
Generative Adversarial Network
Unpaired Dataset
Normalizations
Network Structure
Autoencoder
Encoder
Decoder
Attention Mechanism
Model Outline
Adversarial Loss
Cycle-Consistency Loss
Semantic Consistency Loss and Model Overall Loss
Training Details
Visual Subjective Evaluation and NR-IQA
NR-IQA Evaluation of Sub-Dataset Y
Comparison of Generalization Ability with CycleGAN
Discussions
Findings
Conclusions
