Abstract

Deep learning-based multi-exposure HDR imaging methods outperform traditional methods in recovering image information and removing ghosting artifacts, but they still produce images with severe ghosting when handling poorly exposed dynamic scenes. In this paper, we propose a least squares generative adversarial network (LSGAN)-based HDR imaging algorithm for dynamic scenes. In the generator, we present a global dilated residual dense block (GDRDB) as the basic component for feature fusion. The GDRDB captures both local and global image information, so the network can preserve image details while effectively removing ghosting artifacts. We train the model in two stages: the first pre-trains the generator with a content loss, and the second adversarially trains the generator and discriminator with the LSGAN loss. Extensive experiments on three benchmark datasets show that our approach outperforms most state-of-the-art methods both qualitatively and quantitatively.
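The abstract's second training stage relies on the LSGAN objective. As a minimal sketch (the standard least squares GAN losses, not the paper's exact formulation, which may add a content term and HDR-specific weighting), the discriminator is pushed to score real samples as 1 and generated samples as 0, while the generator pushes its samples toward a score of 1:

```python
import numpy as np

def lsgan_d_loss(real_scores, fake_scores):
    # Least squares discriminator loss:
    # penalize real scores away from 1 and fake scores away from 0.
    return 0.5 * np.mean((real_scores - 1.0) ** 2) + 0.5 * np.mean(fake_scores ** 2)

def lsgan_g_loss(fake_scores):
    # Least squares generator loss:
    # penalize generated samples whose discriminator score is far from 1.
    return 0.5 * np.mean((fake_scores - 1.0) ** 2)

# Toy check: a discriminator that scores real=1 and fake=0 incurs zero loss,
# and a generator whose outputs fool the discriminator (score 1) incurs zero loss.
print(lsgan_d_loss(np.ones(4), np.zeros(4)))  # 0.0
print(lsgan_g_loss(np.ones(4)))               # 0.0
```

Unlike the cross-entropy loss of a standard GAN, the least squares loss keeps penalizing samples that lie on the correct side of the decision boundary but far from it, which yields smoother gradients during the adversarial stage.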
