Abstract

The atmospheric particles and aerosols produced by burning usually cause visual artifacts in single images captured from fire scenarios. Most existing haze removal methods exploit the atmospheric scattering model (ASM) for visual enhancement, which inevitably leads to inaccurate estimation of the atmospheric light and transmission matrix of smoky and hazy inputs. To solve these problems, we present a novel color-dense illumination adjustment network (CIANet) for joint recovery of the transmission matrix, illumination intensity, and the dominant color of aerosols from a single image. Meanwhile, to improve the visual quality of the recovered images, the proposed CIANet jointly optimizes the transmission map, the atmospheric optical value, the aerosol color, and a preliminary recovered scene. Furthermore, we design a reformulated ASM, called the aerosol scattering model (ESM), to smooth the enhancement results while preserving the visual appearance and the semantic information of different objects. Experimental results on both the proposed RFSIE dataset and NTIRE'20 show that our method performs favorably against state-of-the-art dehazing methods in terms of PSNR, SSIM, and subjective visual quality. Furthermore, when concatenating CIANet with Faster R-CNN, we observe an improvement in object detection performance by a large margin.
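For context, the atmospheric scattering model (ASM) that most dehazing methods build on is I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the clear scene radiance, t the transmission map, and A the global atmospheric light. The sketch below illustrates this standard model and its direct inversion with synthetic toy values; it is not the CIANet method itself, and all array values are illustrative assumptions.

```python
import numpy as np

# Standard atmospheric scattering model (ASM):
#   I(x) = J(x) * t(x) + A * (1 - t(x))
# I: observed hazy image, J: clear scene radiance,
# t: transmission map, A: global atmospheric light.

def synthesize_haze(J, t, A):
    """Apply the ASM forward model to a clear image J."""
    return J * t[..., None] + A * (1.0 - t[..., None])

def invert_asm(I, t, A, t_min=0.1):
    """Recover scene radiance by inverting the ASM.

    t is clamped to t_min to avoid amplifying noise where the
    transmission is near zero (a common heuristic).
    """
    t = np.clip(t, t_min, 1.0)[..., None]
    return (I - A) / t + A

# Toy example: a 2x2 RGB "scene" with spatially varying transmission.
J = np.array([[[0.2, 0.4, 0.6], [0.8, 0.1, 0.3]],
              [[0.5, 0.5, 0.5], [0.9, 0.2, 0.7]]])
t = np.array([[0.9, 0.6], [0.4, 0.8]])
A = np.array([0.95, 0.95, 0.95])

I = synthesize_haze(J, t, A)
J_hat = invert_asm(I, t, A)
print(np.allclose(J, J_hat))  # True: recovery is exact while t > t_min
```

In practice t and A must be estimated from the hazy input alone, which is exactly where ASM-based methods accumulate error in smoky fire scenes; this motivates jointly estimating transmission, illumination, and aerosol color as the abstract describes.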

Highlights

Accepted: 21 January 2022

  • The phenomenon of image degradation in fire scenarios is usually caused by the large number of suspended particles generated during combustion

  • To address the above-mentioned problems, this paper proposes a novel color-dense illumination adjustment network (CIANet) that can effectively enhance hazy images captured from fire scenarios

  • The proposed CIANet can effectively improve the visibility and clarity of the scene, thereby promoting the performance of other high-level vision tasks, which constitutes the practical significance of the proposed algorithm



Introduction

The phenomenon of image degradation in fire scenarios is usually caused by the large number of suspended particles generated during combustion. When robots perform rescue in such scenes, the quality of the images collected from the fire scenario is seriously affected [1]. Burning is usually accompanied by uneven light and smoke, which reduce the scene's visibility and cause many high-level vision algorithms to fail [2,3]. Removing haze and smoke from fire-scenario scenes is therefore important for improving the detection performance of rescue robots and monitoring equipment. The brightness distribution in fire scenarios is uneven, and different materials produce smoke of different colors when burning [4]. Consequently, the degradation of images in fire scenarios is more variable than in common hazy scenes.

