The major challenge in high dynamic range (HDR) imaging for dynamic scenes is suppressing ghosting artifacts caused by large object motions or poor exposures. Whereas recent deep learning-based approaches have shown strong synthesis performance, their behavior is difficult to interpret and analyze, and their performance depends on the diversity of the training data. In contrast, traditional model-based approaches yield inferior synthesis performance to learning-based algorithms despite their theoretical rigor. In this paper, we propose an algorithm unrolling approach to ghost-free HDR image synthesis that unrolls an iterative low-rank tensor completion algorithm into deep neural networks, taking advantage of the merits of both learning- and model-based approaches while overcoming their weaknesses. First, we formulate ghost-free HDR image synthesis as a low-rank tensor completion problem by assuming a low-rank structure in the tensor constructed from low dynamic range (LDR) images and linear dependency among the LDR images. We also define two regularization functions to compensate for modeling inaccuracy by extracting hidden model information. Then, we solve the problem efficiently using an iterative optimization algorithm by reformulating it into a series of subproblems. Finally, we unroll the iterative algorithm into a series of blocks, one per iteration, in which the optimization variables are updated by rigorous closed-form solutions and the regularizers are updated by learned deep neural networks. Experimental results on different datasets show that the proposed algorithm provides better HDR image synthesis performance and superior robustness compared with state-of-the-art algorithms, while using significantly fewer training samples.
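To make the unrolled structure concrete, the following is a minimal PyTorch-style sketch of one possible realization of the described pipeline: a stack of blocks, each combining a closed-form variable update with learned regularizer updates. The specific choices here are assumptions for illustration only, since the abstract does not specify them: singular value soft-thresholding stands in for the closed-form low-rank update, small residual CNNs stand in for the two learned regularizers, and the channel layout (three LDR exposures stacked along the channel axis) and block count are hypothetical.

```python
# Minimal sketch of an unrolled low-rank-completion-style network.
# All update rules and shapes below are illustrative assumptions, not the
# paper's actual closed-form solutions or network designs.
import torch
import torch.nn as nn


def lowrank_prox(X, tau):
    """Assumed closed-form step: singular value soft-thresholding applied
    to the matricized LDR stack (a generic low-rank proximal operator)."""
    b, c, h, w = X.shape
    M = X.reshape(b, c, h * w)                      # unfold tensor into a matrix per sample
    U, S, Vh = torch.linalg.svd(M, full_matrices=False)
    S = torch.clamp(S - tau, min=0.0)               # shrink singular values
    return (U * S.unsqueeze(-2) @ Vh).reshape(b, c, h, w)


class LearnedRegularizer(nn.Module):
    """Small residual CNN standing in for one learned regularizer update."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)


class UnrolledBlock(nn.Module):
    """One block = closed-form variable update + two learned regularizer updates."""
    def __init__(self, channels):
        super().__init__()
        self.tau = nn.Parameter(torch.tensor(0.1))  # learnable shrinkage threshold
        self.reg1 = LearnedRegularizer(channels)
        self.reg2 = LearnedRegularizer(channels)

    def forward(self, X):
        X = lowrank_prox(X, self.tau.abs())         # closed-form (model-based) update
        X = self.reg1(X)                            # first learned regularizer
        X = self.reg2(X)                            # second learned regularizer
        return X


class UnrolledHDRNet(nn.Module):
    """Stack of blocks, one per iteration of the original iterative algorithm."""
    def __init__(self, channels=9, num_blocks=5):   # e.g. 3 LDR frames x RGB (assumed)
        super().__init__()
        self.blocks = nn.ModuleList(
            [UnrolledBlock(channels) for _ in range(num_blocks)]
        )

    def forward(self, ldr_stack):                   # ldr_stack: (B, C, H, W)
        X = ldr_stack
        for blk in self.blocks:
            X = blk(X)
        return X
```

Because the blocks are trained end to end while each block retains an interpretable model-based update, such a design reflects the stated goal of combining the data efficiency and interpretability of model-based methods with the expressive power of learned components.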