Abstract

Seamlessly mixing 3D computer-generated images with real-scene images in augmented reality has wide-ranging applications in entertainment, cinematography, design visualization, and medical training. The challenge is to make virtual objects blend harmoniously into the real scene and appear real. Beyond constructing a detailed geometric 3D model and obtaining accurate surface properties for virtual objects, using real-scene lighting information to render the virtual objects is another important factor in achieving photorealistic rendering. This factor not only increases the visual complexity of the virtual objects but also determines the consistency of illumination between the virtual objects and the surrounding real objects in the scene. Conventional rendering techniques such as ray tracing and radiosity require intensive computation and data preparation to solve the light transport equation; they are therefore less practical for rendering virtual objects in augmented reality, which demands real-time performance. This work explores an image-based, hardware-accelerated approach to improving photorealism when rendering synthetic objects in augmented reality. It uses the image-based lighting technique of environment illumination maps together with a simple yet practical multi-pass rendering framework for augmented reality.
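To illustrate the environment-illumination-map idea mentioned above, the sketch below computes the diffuse irradiance reaching a surface from a latitude-longitude environment map by summing radiance over the hemisphere around the surface normal, weighted by the cosine term and each texel's solid angle. This is a generic minimal sketch of image-based lighting, not the paper's actual framework; the texel layout, map size, and scalar (single-channel) radiance values are illustrative assumptions.

```python
import math

def latlong_direction(col, row, width, height):
    """World-space direction for the centre of texel (col, row) in a
    latitude-longitude map: theta is the polar angle from +Y, phi the azimuth."""
    phi = 2.0 * math.pi * (col + 0.5) / width
    theta = math.pi * (row + 0.5) / height
    return (math.sin(theta) * math.cos(phi),
            math.cos(theta),
            math.sin(theta) * math.sin(phi))

def diffuse_irradiance(env, width, height, normal):
    """Cosine-weighted sum of environment radiance over the hemisphere
    around `normal`; each texel subtends solid angle sin(theta)*dtheta*dphi."""
    total = 0.0
    dtheta = math.pi / height
    dphi = 2.0 * math.pi / width
    for row in range(height):
        theta = math.pi * (row + 0.5) / height
        solid_angle = math.sin(theta) * dtheta * dphi
        for col in range(width):
            d = latlong_direction(col, row, width, height)
            cos_term = d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2]
            if cos_term > 0.0:  # only directions in the upper hemisphere
                total += env[row][col] * cos_term * solid_angle
    return total

# Sanity check: a constant unit-radiance environment yields irradiance pi
# on any surface, since the hemispherical integral of cos(theta) is pi.
w, h = 64, 32
env = [[1.0] * w for _ in range(h)]
E = diffuse_irradiance(env, w, h, (0.0, 1.0, 0.0))
```

In a real-time setting this integral is typically precomputed (e.g. baked into an irradiance map or fetched from GPU texture hardware per pass) rather than summed per shading point, which is what makes the image-based, hardware-based approach practical for augmented reality.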

