Abstract

This paper introduces a novel Deep Learning (DL) architecture for inferring temperature information from aerial true-color RGB images by transforming them into the Infrared Radiation (IR) domain. This work is motivated by several observations. First, off-the-shelf contemporary drones are typically equipped only with regular cameras. Second, IR heat-mapping cameras are too costly and heavy for payload-limited drones. Third, including IR cameras would require additional communication channels and power supply. Finally, IR cameras provide lower resolution and shorter effective ranges than RGB cameras. Therefore, learning-based translation of aerial RGB images to the IR domain can be extremely useful, not only for new tests but also for offline processing of currently available forest fire datasets with RGB images. We propose an Improved Conditional Generative Adversarial Network (IC-GAN), where matched IR images are used as a condition to guide the generator's translation process. The U-Net-based generator is concatenated with a mapper module that transforms the output into a stack of diverse color spaces with learnable parameters. To avoid unnecessary penalization of pixel-level disparities and achieve structural similarity, we incorporate clustering alignment into the loss function. The proposed framework is compared against several state-of-the-art methods, including U-Net, Efficient U-Net, GAN, and Conditional-GAN, from both subjective (human perception) and objective evaluation perspectives. The results support our method's efficacy, demonstrating improvements of roughly 6% in PSNR, 15% in UQI, 9% in SSIM, and 23% in IoU.
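To make the described objective concrete, below is a minimal, hypothetical PyTorch sketch of the kind of training step the abstract outlines: a small generator stands in for the paper's U-Net, a learnable 1x1-convolution "mapper" projects the IR estimate into a stack of channels (loosely mimicking the diverse color spaces), and a clustering-alignment term is added to the adversarial loss. Every module name, the quantile-based clustering, and the loss weighting are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of an IC-GAN-style training step (RGB -> IR translation).
# Nothing here is taken from the paper's code; it only mirrors the abstract's
# description at a structural level.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Stand-in for the paper's U-Net generator: RGB in, 1-channel IR out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, rgb):
        return self.net(rgb)

class Mapper(nn.Module):
    """Learnable projection of the IR estimate into k stacked channels,
    loosely mimicking the abstract's 'diverse color spaces' mapping."""
    def __init__(self, k=3):
        super().__init__()
        self.proj = nn.Conv2d(1, k, 1)  # learnable per-space 1x1 weights
    def forward(self, ir):
        return self.proj(ir)

def clustering_alignment(pred, target, n_clusters=4):
    """Illustrative alignment term: bin pixels by target-intensity quantiles
    and match per-cluster means, so isolated pixel errors are penalized less
    than cluster-level (structural) mismatches."""
    edges = torch.quantile(target.flatten(),
                           torch.linspace(0, 1, n_clusters + 1, device=target.device))
    loss = pred.new_zeros(())
    for i in range(n_clusters):
        mask = (target >= edges[i]) & (target <= edges[i + 1])
        if mask.any():
            loss = loss + (pred[mask].mean() - target[mask].mean()).abs()
    return loss / n_clusters

# One generator update (discriminator training is omitted for brevity; `disc`
# scores (RGB, IR) pairs for realness in [0, 1]).
gen, mapper = TinyGenerator(), Mapper()
disc = nn.Sequential(nn.Conv2d(4, 1, 3, padding=1), nn.Sigmoid())
opt = torch.optim.Adam(list(gen.parameters()) + list(mapper.parameters()), lr=2e-4)

rgb = torch.rand(2, 3, 64, 64)      # dummy aerial RGB batch
ir_true = torch.rand(2, 1, 64, 64)  # matched IR ground truth (the GAN condition)

ir_fake = gen(rgb)
adv = F.binary_cross_entropy(disc(torch.cat([rgb, ir_fake], 1)),
                             torch.ones(2, 1, 64, 64))
align = clustering_alignment(ir_fake, ir_true)
spaces = mapper(ir_fake)            # stacked channels; further loss terms would use these
loss = adv + 10.0 * align           # the weighting is an assumption
opt.zero_grad(); loss.backward(); opt.step()
```

Matching per-cluster statistics rather than raw pixels is one plausible way to reward structural similarity while tolerating isolated pixel-level disparities, which is the stated motivation for the clustering-alignment term.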
