Abstract

Example-based image relighting aims to relight an input image so that it matches the lighting conditions of a target example image. Deep learning-based methods for this task have become highly popular, but they are often constrained by geometric priors or suffer from poor shadow reconstruction and a loss of texture detail. In this paper, we propose an image-to-image translation network, DGATRN, that tackles this problem by enhancing feature extraction and exploiting contextual information to achieve visually plausible example-based image relighting. Specifically, DGATRN consists of a scene extraction network, a shadow calibration network, and a rendering network; our key contributions lie in the first two. We propose an up- and downsampling approach that improves feature extraction so that scene and texture details are captured more faithfully, and we introduce a feature attention downsampling block and a knowledge transfer mechanism that exploit the attention impact and the underlying knowledge connection between scene and shadow. Experiments were conducted to evaluate the usefulness and effectiveness of the proposed method.
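To make the attention-based downsampling idea more concrete, the following is a minimal PyTorch sketch of what a feature attention downsampling block could look like. The class name, the strided-convolution downsampling, and the squeeze-and-excitation style channel gate are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a feature-attention downsampling block.
# Names and internal structure are assumptions for illustration only.
import torch
import torch.nn as nn

class FeatureAttentionDownsample(nn.Module):
    """Downsample feature maps with a strided convolution, then reweight
    channels with a squeeze-and-excitation style attention gate."""

    def __init__(self, in_ch: int, out_ch: int, reduction: int = 8):
        super().__init__()
        # Strided convolution halves spatial resolution and changes channel count.
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Channel attention: global pooling -> bottleneck -> sigmoid gate.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.down(x)
        return feat * self.attn(feat)  # per-channel reweighting

if __name__ == "__main__":
    block = FeatureAttentionDownsample(in_ch=64, out_ch=128)
    x = torch.randn(1, 64, 128, 128)
    print(block(x).shape)  # torch.Size([1, 128, 64, 64])
```

A block of this kind would sit in the encoder path (e.g., of the scene extraction or shadow calibration network), letting the downsampling step emphasize informative channels rather than treating all features equally.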
