Abstract

Context enhancement is critical for environmental perception in night vision applications, especially in dark-night situations without sufficient illumination. In this article, we propose a thermal image translation method, called IR2VI, which translates thermal/infrared (IR) images into color visible (VI) images. IR2VI consists of two cascaded steps: translation from nighttime thermal IR images to gray-scale visible images (GVI), referred to as IR-GVI, and translation from GVI to color visible images (CVI), referred to as GVI-CVI in this article. For the first step, we develop Texture-Net, a novel unsupervised image translation neural network based on generative adversarial networks. Texture-Net learns the intrinsic characteristics of the GVI domain and integrates them into the IR image. Compared with state-of-the-art unsupervised image translation methods, the proposed Texture-Net addresses common challenges, e.g., incorrect mapping and lack of fine details, through a structure connection module and a region-of-interest (ROI) focal loss. For the second step, we investigate state-of-the-art gray-scale image colorization methods and integrate a deep convolutional neural network into the IR2VI framework. The results of comprehensive evaluation experiments demonstrate the effectiveness of the proposed IR2VI image translation method. This solution will contribute to environmental perception and understanding in varied night vision applications.
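To make the two-step pipeline concrete, the sketch below outlines, in PyTorch, a generator with a structure connection (a long skip from the IR input to the output), an ROI-weighted per-pixel loss, and a small colorization CNN. All layer widths, module names, and the exact weighting scheme are illustrative assumptions rather than the authors' published implementation; in particular, Texture-Net is trained adversarially on unpaired data, so the paired loss shown here only illustrates how an ROI mask can up-weight target regions.

```python
# Hypothetical sketch of the IR2VI two-step pipeline (IR -> GVI -> CVI).
# Layer sizes and the ROI weighting factor are assumptions for illustration.
import torch
import torch.nn as nn


class TextureNetGenerator(nn.Module):
    """IR -> GVI generator with a structure connection from input to output."""

    def __init__(self, base_channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, base_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(base_channels, 1, 3, padding=1)

    def forward(self, ir: torch.Tensor) -> torch.Tensor:
        residual = self.decoder(self.encoder(ir))
        # Structure connection: predict a texture residual and add it to the IR
        # input, so scene structure is preserved and incorrect mappings are reduced.
        return torch.tanh(ir + residual)


def roi_weighted_l1_loss(fake_gvi, target, roi_mask, roi_weight: float = 5.0):
    """Per-pixel L1 loss that up-weights regions of interest (e.g., targets).

    roi_mask is a {0,1} tensor of the same spatial size; roi_weight is a guess.
    The actual Texture-Net objective is adversarial and unpaired; this paired
    form is only a simplification to show the ROI focal idea.
    """
    per_pixel = torch.abs(fake_gvi - target)
    weights = 1.0 + (roi_weight - 1.0) * roi_mask
    return (weights * per_pixel).mean()


class ColorizationNet(nn.Module):
    """GVI -> CVI colorizer: a small CNN predicting 3-channel color from gray input."""

    def __init__(self, base_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, base_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, 3, 3, padding=1),
        )

    def forward(self, gvi: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.net(gvi))


if __name__ == "__main__":
    ir_image = torch.rand(1, 1, 128, 128)      # nighttime thermal input
    roi_mask = torch.zeros(1, 1, 128, 128)
    roi_mask[..., 40:80, 40:80] = 1.0          # hypothetical target region

    ir_to_gvi = TextureNetGenerator()
    gvi_to_cvi = ColorizationNet()

    gvi = ir_to_gvi(ir_image)                  # step 1: IR -> gray-scale visible
    cvi = gvi_to_cvi(gvi)                      # step 2: GVI -> color visible
    print(gvi.shape, cvi.shape)                # (1, 1, 128, 128) (1, 3, 128, 128)
```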
