Abstract

Colorizing a thermal image into a realistic visible-spectrum picture is a challenging task. Thermal cameras can detect objects in environments where human vision cannot (e.g., fog, rain, or dark surroundings), yet thermal images are difficult to analyse and potential obstacles are hard to identify with the naked eye. For industries working on autonomous vehicles, drones, and similar surveillance applications, thermal cameras are highly beneficial, and colorization of the captured thermal imagery becomes crucial. This research presents a translation method with enhancement that converts thermal infrared images into visual colour pictures using a custom-tailored Convolutional Neural Network architecture. A pedestrian detection system is designed that provides image colorization, image enhancement, and object detection functionalities. The colorized and enhanced images are fed to a detection model based on a pre-trained YOLOv5 (You Only Look Once) architecture, and bounding boxes whose coordinates enclose the detected pedestrians are drawn on the resulting images. The proposed models are trained on the publicly available CAMEL thermal dataset. The integrated model achieves a test accuracy of 92.1% for pedestrian detection. The performance of the colorization and deblurring models is quantitatively evaluated using metrics such as Root Mean Squared Error (RMSE), the Structural and Feature Similarity Indexes, and Peak Signal-to-Noise Ratio (PSNR). The performance is further confirmed by a qualitative assessment that visually compares the resulting images with the ground truth. PSNR values up to 84 dB and RMSE values as low as 0.006 are obtained, indicating reasonable similarity between the ground truth and the resulting colorized, enhanced images.
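
To make the evaluation and detection steps concrete, the sketch below is a minimal illustration under our own assumptions, not the authors' code. The `rmse` and `psnr` functions follow the standard definitions of the metrics reported above, and the detection call uses the public ultralytics/yolov5 torch.hub interface with a pre-trained checkpoint; `colorized_frame` is a hypothetical placeholder standing in for one colorized, enhanced image.

```python
import numpy as np
import torch

def rmse(pred, target):
    """Root Mean Squared Error between two images scaled to [0, 1]."""
    diff = pred.astype(np.float64) - target.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(pred, target, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the ground truth."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(max_val ** 2 / mse))

# Hypothetical stand-in for one colorized, enhanced frame (H x W x 3, RGB).
colorized_frame = np.zeros((512, 640, 3), dtype=np.uint8)

# Pedestrian detection with a pre-trained YOLOv5 checkpoint loaded via torch.hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model(colorized_frame)
boxes = results.xyxy[0]  # one row per detection: [x1, y1, x2, y2, confidence, class]
```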
