Abstract

Thermal cameras can capture images even in low-light conditions; however, humans cannot reliably recognize faces in thermal images. Translating thermal images to the visible domain is one solution to the problem of face recognition in thermal imagery. Most prior work has proposed Generative Adversarial Network (GAN)-based solutions for thermal-to-visible image translation. However, GANs are heavy networks that consume a large amount of resources for this task. In this paper, we propose an encoder–decoder architecture for thermal-to-visible translation of human faces. Since the proposed architecture is not based on GANs, it is lightweight. The proposed method works well for both disguised and non-disguised thermal facial images. Standard comparison metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Multiscale Structural Similarity Index (MS-SSIM) are used to evaluate the quality of the generated visible images with respect to the ground truth. Our architecture outperforms the current state-of-the-art image translators, namely pix2pix, Cycle-GAN, modified thermal-to-visible GAN, and Dual GAN, by a considerable margin on both the disguised and the non-disguised datasets.
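As an illustration of the evaluation metrics named above, the following is a minimal pure-Python sketch of PSNR and a simplified single-window SSIM. It is not the authors' evaluation code: standard SSIM uses a sliding Gaussian window over the image, which is omitted here for brevity, and images are represented as flat lists of 8-bit pixel intensities.

```python
import math

def psnr(ref, gen, max_val=255.0):
    """Peak Signal-to-Noise Ratio: 10 * log10(MAX^2 / MSE) between two
    equally sized images, given as flat lists of pixel intensities."""
    mse = sum((r - g) ** 2 for r, g in zip(ref, gen)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def ssim_global(ref, gen, max_val=255.0):
    """Simplified SSIM computed over the whole image as one window
    (the standard metric averages over local Gaussian windows)."""
    n = len(ref)
    mu_x = sum(ref) / n
    mu_y = sum(gen) / n
    var_x = sum((r - mu_x) ** 2 for r in ref) / n
    var_y = sum((g - mu_y) ** 2 for g in gen) / n
    cov = sum((r - mu_x) * (g - mu_y) for r, g in zip(ref, gen)) / n
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Higher is better for both metrics: PSNR is unbounded (infinite for identical images), while SSIM lies in [-1, 1] and equals 1 for a perfect reconstruction.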
