Abstract

Scanning and acquiring a 3D indoor environment suffers from complex occlusions and misalignment errors. The reconstruction obtained from an RGB-D scanner contains holes in geometry and ghosting in texture. These artifacts are easily noticeable, and the raw reconstruction cannot be considered visually compelling VR content without further processing. On the other hand, the well-known Manhattan World prior successfully recreates relatively simple structures. In this article, we push the limits of planar representations of indoor environments. Given an initial 3D reconstruction captured by an RGB-D sensor, we use planes not only to represent the environment geometrically but also to solve an inverse rendering problem that considers texture and light. The complex process of shape inference and intrinsic imaging is greatly simplified with the help of detected planes, yet it produces a realistic 3D indoor environment. The generated content can adequately represent the spatial arrangement of a scene for various AR/VR applications and can readily be composited with virtual objects that exhibit plausible lighting and texture.
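As a concrete illustration of the plane-based abstraction, the minimal sketch below greedily peels dominant planes off a scanned point cloud with RANSAC. Open3D, the input file name, and all thresholds are assumptions made for illustration; the article does not prescribe a specific library or these parameters.

```python
# Minimal sketch: iteratively extract planar primitives from an RGB-D
# reconstruction with RANSAC (illustrative; not the article's exact method).
import open3d as o3d

def extract_planes(pcd, max_planes=8, dist=0.02, min_inliers=5000):
    """Greedily peel dominant planes off a point cloud."""
    planes = []
    rest = pcd
    for _ in range(max_planes):
        if len(rest.points) < min_inliers:
            break
        # Fit one plane (ax + by + cz + d = 0) and collect its inliers.
        model, inliers = rest.segment_plane(
            distance_threshold=dist, ransac_n=3, num_iterations=1000)
        if len(inliers) < min_inliers:
            break
        planes.append((model, rest.select_by_index(inliers)))
        rest = rest.select_by_index(inliers, invert=True)
    # Planar structure (background) vs. residual points (foreground objects).
    return planes, rest

pcd = o3d.io.read_point_cloud("scan.ply")  # hypothetical input scan
planes, residual = extract_planes(pcd)
```

The greedy peel-off order matters: large walls and floors absorb their inliers first, so smaller planar surfaces such as desktops are detected against a cleaner residual.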

Highlights

  • A realistic 3D environment has extensive possible applications

  • We demonstrate that our representation creates realistic visualizations for various indoor spaces and at the same time decomposes the scene semantically and intrinsically

  • Even though we resort to simple geometry, we can still create the illusion of a realistic environment by rendering the model with a high-resolution texture


Summary

INTRODUCTION

A realistic 3D environment has extensive possible applications. The immediate use is visualizing 3D content for commercial indoor solutions, such as real estate or the interior design of homes, offices, or hotel rooms. Even in a confined space, indoor lighting varies from location to location due to light fixtures, windows, and the shadows of complex objects. In previous work, these effects are often ignored, and large-scale lights are modeled with environment maps or directional lighting [20], [21], [22], [23]. We offer three main contributions: (1) converting the 3D reconstruction of an indoor environment into an abstract, compact representation of a realistic indoor environment based on planar primitives; (2) decomposing the created content with an inverse rendering pipeline and making it readily available for physically based rendering through a joint analysis of shape, texture, and lighting; and (3) extracting basic semantics as a foreground/background segmentation utilizing the recovered geometry and texture. We demonstrate that our representation creates realistic visualizations of various indoor spaces and at the same time decomposes the scene semantically (walls, floors, desktops, and objects) and intrinsically (geometry, lighting, and texture). The new form of visualization is compared with other recent representations of 3D indoor environments [16], [24], [25].
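To make the inverse rendering goal concrete, the block below sketches a standard diffuse (Lambertian) image-formation model of the kind such pipelines invert. The symbols and the direct/indirect split are illustrative assumptions and may differ from the article's exact formulation.

```latex
% Illustrative Lambertian image-formation model (assumed form, not the
% article's exact equations). For a surface point x on a detected plane
% with normal n, the observed color factors into texture and lighting:
\[
  I(x) \;=\; \frac{\rho(x)}{\pi}\,
  \Bigl(\,\underbrace{\sum_{k} L_k \,\max\bigl(0,\ \omega_k \cdot n\bigr)}_{\text{direct light}}
  \;+\; \underbrace{E_{\mathrm{ind}}(x)}_{\text{indirect light}}\,\Bigr)
\]
% Here \rho(x) is the diffuse albedo (texture), L_k and \omega_k are the
% intensity and direction of the k-th light source, and E_{ind}(x) is the
% irradiance gathered from inter-reflections. Inverse rendering estimates
% \rho, L_k, \omega_k, and E_{ind} from the observed I(x), a problem that
% the detected planes constrain by fixing n over large regions.
```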

RELATED WORK
PROBLEM AND ASSUMPTIONS
GEOMETRY ESTIMATION
Plane Detection and Refinement
Plane Completion
COLOR ESTIMATION
Color-Transfer Optimization
Per-Plane Registration
Foreground-Background Optimization
Background
LIGHT PARAMETER ESTIMATION
Setting
Direct Light
Indirect Light
RESULTS
Implementation
CONCLUSION
Limitations

