Abstract

With the rapid development of visual sensors and artificial intelligence (AI), the volume of video and image data has grown dramatically, especially in the era of AI-enabled intelligent transportation. Under low-light imaging conditions, however, the camera captures only weak scene-reflected light, so the visual data inevitably suffers from noise, low contrast, and poor brightness. This degradation hinders vision-based traffic situational awareness, traffic safety management, and automatic/autonomous driving. To guarantee high-quality visual data, we propose a multiscale deep stacking fusion enhancer (termed MDSFE) for low-light image enhancement. In particular, MDSFE consists of four components, i.e., a coarse extraction module (C-EM), a coarse attention fusion module (C-AFM), a multiscale dense enhancement module (M-DEM), and a fine encoder-decoder fusion module (F-EDFM). Combining these modules strengthens the network's feature mapping and expression abilities. Experimental results on both synthetic and real-world scenes show that the proposed method delivers superior enhancement under different imaging conditions and also improves object detection precision in low-light conditions.
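The abstract names the four modules but not their internal structure. The following is a minimal PyTorch sketch of how such a pipeline could be stacked, assuming plausible placeholder designs: plain convolutions for C-EM, channel attention for C-AFM, dilated multiscale branches with dense fusion for M-DEM, and a one-level encoder-decoder for F-EDFM. All layer choices and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CoarseExtraction(nn.Module):
    """C-EM (assumed design): stacked convolutions for coarse feature extraction."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class CoarseAttentionFusion(nn.Module):
    """C-AFM (assumed design): channel attention reweighting the coarse features."""
    def __init__(self, channels=32):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, f):
        return f * self.attn(f)  # fuse features with learned channel weights


class MultiscaleDenseEnhancement(nn.Module):
    """M-DEM (assumed design): parallel dilated branches densely fused, plus a skip."""
    def __init__(self, channels=32):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, f):
        feats = [torch.relu(b(f)) for b in self.branches]       # multiscale receptive fields
        return torch.relu(self.fuse(torch.cat(feats, dim=1))) + f


class EncoderDecoderFusion(nn.Module):
    """F-EDFM (assumed design): one-level encoder-decoder emitting the enhanced image."""
    def __init__(self, channels=32):
        super().__init__()
        self.down = nn.Conv2d(channels, 2 * channels, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(2 * channels, channels, 4, stride=2, padding=1)
        self.out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, f):
        d = torch.relu(self.down(f))
        u = torch.relu(self.up(d))
        return torch.sigmoid(self.out(u + f))  # fuse decoder output with skip features


class MDSFE(nn.Module):
    """Stacks the four modules in order: C-EM -> C-AFM -> M-DEM -> F-EDFM."""
    def __init__(self, channels=32):
        super().__init__()
        self.cem = CoarseExtraction(channels)
        self.cafm = CoarseAttentionFusion(channels)
        self.mdem = MultiscaleDenseEnhancement(channels)
        self.fedfm = EncoderDecoderFusion(channels)

    def forward(self, x):
        return self.fedfm(self.mdem(self.cafm(self.cem(x))))


low_light = torch.rand(1, 3, 128, 128)  # dummy low-light input
enhanced = MDSFE()(low_light)
print(enhanced.shape)  # torch.Size([1, 3, 128, 128])
```

The sketch only demonstrates the stacking order implied by the abstract (coarse extraction, attention fusion, multiscale dense enhancement, then fine encoder-decoder fusion); the actual module internals, channel widths, and training losses would come from the full paper.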
