Abstract

Rendering glare in images is a great challenge: current rendering algorithms do not model the refraction of the human eye well, so in safety-critical applications such as vehicle headlamps the rendered effect is unrealistic and may compromise safety evaluation. Traditional glare rendering algorithms rely on a large number of hand-designed wave-optics operators; they can neither render online in real time nor cope with the complex and variable imaging conditions found in practice. Generative adversarial network (GAN) based algorithms from the field of image style translation have been introduced to generate glare effects and can render online in real time, yet they still fail to reproduce some effects and suffer from artifacts such as detail distortion. In this work, we present a novel glare simulation method, the first to apply a generative-model-based style transfer method to glare rendering. In a nutshell, we propose the Glare Generation Network, which aggregates the benefits of content diversity and style consistency by combining a paired branch and an unpaired branch in a dual generative adversarial network. Our approach increases the structural similarity index measure (SSIM) by at least 0.039 on a custom darkroom vehicle headlamp dataset. We further show that our method significantly improves inference speed.
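The abstract reports results in terms of SSIM. As a minimal, self-contained illustration of that metric (not the paper's evaluation code), the sketch below implements the single-window, whole-image form of SSIM in NumPy with the standard constants; published evaluations typically use a sliding-window variant such as `skimage.metrics.structural_similarity`. The images here are random placeholders, not headlamp data.

```python
# Hedged sketch: global (single-window) SSIM between two images in [0, 1].
# Constants C1, C2 follow the common convention K1=0.01, K2=0.03.
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """SSIM computed over the whole image as one window."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

rng = np.random.default_rng(0)
img = rng.random((64, 64))                  # placeholder "reference" image
print(round(global_ssim(img, img), 3))      # identical images -> 1.0
noisy = np.clip(img + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(global_ssim(noisy, img) < 1.0)        # added noise lowers the score
```

A reported gain of "at least 0.039" is therefore a shift on this 0-to-1 similarity scale between the rendered glare image and its ground-truth reference.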

