Abstract

A learning-based image relighting framework is proposed for automatically changing the lighting conditions of facial images from one lighting source to another. Given only a single unseen 2D facial test image, the framework automatically infers the highlight and shadow areas of the relighted image in accordance with the specific facial characteristics of the input, using a learned non-parametric Markov random field (MRF) model of the facial appearance correlation between the source and target lighting conditions. The proposed framework first decomposes the input image into its global and local components, which relate mainly to the lighting characteristics and the detailed facial appearance of the image, respectively. The two components are then processed independently, both to ease the problem of insufficient training samples and to properly analyze the local contrast, overall lighting direction, and personal feature characteristics of the unseen subject. Specifically, the global and local components are processed by a lighting-aware classifier and a personal-characteristic-aware classifier, respectively, to determine the semantic factors of the facial region. These semantic factors are then used to update the MRF model of the facial appearance correlation between the source and target lighting conditions and to produce lighting enhancement matrices for the lighting and facial characteristic components, respectively. Finally, the lighting enhancement matrices are applied to the original decomposition images, which are then integrated to obtain the final relighted result. The experimental results show that the proposed framework generates vivid and recognizable results despite the scarcity of training samples. Furthermore, the relighted results successfully simulate the individual lighting effects produced by the specific personal characteristics of the input image, such as nose and cheek shadows. The effectiveness of the proposed framework is demonstrated by means of face verification tests, in which input images taken under side-lighting conditions are transformed to normal lighting conditions and then matched against a dataset of ground-truth images also taken under normal lighting conditions.
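
The decompose-enhance-recombine pipeline described above can be illustrated with a minimal sketch. The snippet below is not the paper's actual method: it assumes a log-domain Gaussian low-pass split into global (lighting) and local (detail) components, and the enhancement matrices global_gain and local_gain, which the framework would infer via its classifiers and MRF model, are simply supplied by the caller. All function and variable names are hypothetical.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def relight(image, global_gain, local_gain, sigma=15.0):
        # Work in the log domain so that the lighting (global) and
        # detail (local) components combine additively.
        log_img = np.log(image.astype(np.float64) + 1e-6)

        # The low-frequency component approximates the overall lighting;
        # the residual keeps the fine facial detail.
        global_comp = gaussian_filter(log_img, sigma=sigma)
        local_comp = log_img - global_comp

        # Apply the per-pixel enhancement matrices to each component,
        # then recombine and return to the intensity domain.
        relit = np.exp(global_gain * global_comp + local_gain * local_comp)
        return np.clip(relit, 0.0, 255.0)

In this sketch the two gain arrays play the role of the lighting enhancement matrices: a gain applied to the global component shifts the overall lighting, while a gain applied to the local component adjusts person-specific detail such as nose and cheek shadows.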

Highlights

  • Frontal lighting conditions are an essential requirement for most face-related applications, such as face identification, recognition, and animation

  • The tone mapping method proposed in [14] is integrated with a principal component analysis (PCA) clustering algorithm to account for the semantic lighting factors of image transformation tasks; the high-dynamic-range image generation method in [12] applies a convolutional neural network (CNN) to map input images to target outputs (a minimal PCA clustering sketch follows this list)

  • We propose a lighting- and personal-characteristic-aware Markov random field (MRF) model for the lighting transformation task, learned from a limited-size training dataset
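
To make the PCA clustering idea in the second highlight concrete, the following sketch (under assumed, hypothetical names; it is not the method of [14]) projects aligned face images onto their leading principal components and groups them by apparent lighting condition.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def cluster_lighting_conditions(face_images, n_components=10, n_clusters=4):
        # Flatten each aligned face image into one feature vector.
        X = np.stack([img.ravel() for img in face_images]).astype(np.float64)

        # Project onto the leading principal components, which capture the
        # dominant (largely lighting-driven) appearance variation.
        features = PCA(n_components=n_components).fit_transform(X)

        # Cluster in the low-dimensional space; each cluster then serves
        # as one semantic lighting class.
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)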


Introduction

Frontal lighting conditions are an essential requirement for most face-related applications, such as face identification, recognition, and animation. Developing an automatic facial image relighting system capable of reproducing the unique light and shadow style of a particular lighting condition is challenging. Histogram matching approaches (such as the methods proposed in [1, 17, 25]) are typical image relighting methods, where the objective is to convert the statistical intensity distribution of the input image to that of the (target) reference image. Such methods work well for natural photographs, but they do not produce lighting effects that depend on specific image features or structure. Lighting factors tend to be more complex; for example, the intensity distributions of different images vary even under the same light source.
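
Histogram matching itself is straightforward to implement. The sketch below is a standard numpy formulation (not taken from [1, 17, 25]): it maps each source intensity through the source CDF and the inverse reference CDF, which is exactly why the result depends only on global intensity statistics and ignores image structure.

    import numpy as np

    def match_histograms(source, reference):
        # Unique intensities, their positions, and their counts.
        src_values, src_idx, src_counts = np.unique(
            source.ravel(), return_inverse=True, return_counts=True)
        ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)

        # Empirical cumulative distribution functions of both images.
        src_cdf = np.cumsum(src_counts) / source.size
        ref_cdf = np.cumsum(ref_counts) / reference.size

        # Inverse-CDF lookup: for each source intensity, pick the reference
        # intensity whose CDF value is closest.
        matched = np.interp(src_cdf, ref_cdf, ref_values)
        return matched[src_idx].reshape(source.shape)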

