Abstract

We introduce ShadowGAN, a generative adversarial network (GAN) for synthesizing shadows for virtual objects inserted into images. Given a target image containing several existing objects with shadows, and an input source object with a specified insertion position, the network generates a realistic shadow for the source object. The shadow is synthesized by a generator; guided by the proposed local and global adversarial discriminators, the synthetic shadow is locally realistic in shape and globally consistent with the other objects’ shadows in terms of shadow direction and area. To overcome the lack of training data, we produced training samples from public 3D models using rendering techniques. Experimental results from a user study show that the synthetic shadowed results look natural and authentic.
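The two-discriminator objective described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names (`generator_loss`, `discriminator_loss`), the use of binary cross-entropy, and the weighting terms `lambda_local`/`lambda_global` are assumptions. Conceptually, the local discriminator scores a crop around the insertion point, the global discriminator scores the whole composite image, and the generator is trained to fool both.

```python
import math

def bce(pred, target):
    """Binary cross-entropy for a single scalar prediction in (0, 1)."""
    eps = 1e-7
    pred = min(max(pred, eps), 1.0 - eps)
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

def generator_loss(d_local_fake, d_global_fake,
                   lambda_local=1.0, lambda_global=1.0):
    """The generator wants both discriminators to label its shadow as real (1).
    d_local_fake  : local discriminator's score on a crop of the synthesized shadow
    d_global_fake : global discriminator's score on the whole composite image
    (lambda_* weights are hypothetical; the paper's weighting is not given here)."""
    return (lambda_local * bce(d_local_fake, 1.0)
            + lambda_global * bce(d_global_fake, 1.0))

def discriminator_loss(d_real, d_fake):
    """Each discriminator labels real shadows as 1 and synthesized shadows as 0."""
    return bce(d_real, 1.0) + bce(d_fake, 0.0)
```

In a full training loop, each step would alternate between minimizing `discriminator_loss` for both discriminators and minimizing `generator_loss` for the generator, as in a standard GAN.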

Highlights

  • Inserting virtual objects into scenes has a wide range of applications in visual media, from movies, advertisements, and entertainment to virtual reality

  • We address the shadow synthesis problem for virtual objects inserted in an image

  • Our proposed ShadowGAN is trained on synthetic data, where static scene images are rendered using 3D models indexed by ShapeNet [34]


Summary

Introduction

Inserting virtual objects into scenes has a wide range of applications in visual media, from movies, advertisements, and entertainment to virtual reality. Consistency of shadows between the original scene and the inserted object contributes greatly to the naturalness of the results. Even an experienced editor must spend considerable effort to produce convincing results using commercial editing software such as Adobe Photoshop. The difficulty stems from the lack of accurate estimates of illumination and scene geometry. Other methods [1,2,3,4] synthesize shadows using approximately estimated illumination and reconstructed scene geometry; such computations require either user interaction or precise tools, and are nonetheless time-consuming.

