Abstract

To accelerate the creation of digital 3D environments, we propose a workflow that uses neural networks to create 3D indoor room layouts in the Unity game engine from 2D equirectangular RGB 360° panorama images. Our approach is inspired by HorizonNet, which generates textured room layouts as point clouds using a recurrent neural network (RNN). Point clouds are undesirable in VR, however, because individual data points become visible at close range and break user immersion. Instead, we use 3D meshes composed of small triangular faces that stitch together with no gaps, simulating realistic solid surfaces. We convert room layout representations from point cloud to 3D mesh by extracting the room metadata predicted by HorizonNet and dynamically generating a textured custom mesh in Unity. The resulting mesh layouts can be applied directly in Unity VR applications: users can capture 360° images on their mobile phones and visualize the room layouts in VR through our system. Our evaluations suggest that the mesh representation improves frame rates and memory usage without affecting the layout accuracy of the original approach, providing satisfactory room layouts for VR development.

Keywords: Virtual reality, Room layout estimation, Artificial neural network, Point cloud, Mesh, Unity
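Although the abstract includes no code, the point-cloud-to-mesh conversion it describes can be illustrated with a small sketch. The snippet below is a minimal, hypothetical example, not the authors' implementation: it assumes the room's floor corners and ceiling height have already been recovered from HorizonNet's layout prediction, and builds a gap-free triangle mesh exported as an OBJ file that Unity can import. The function name `layout_to_obj` and both of its inputs are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): turn a HorizonNet-style layout
# prediction into a gap-free triangle mesh importable by Unity.
# Assumptions: `floor_corners` is a list of (x, z) room-corner positions on
# the floor plane (y = 0), ordered counter-clockwise, and `height` is the
# estimated ceiling height -- both recoverable from HorizonNet's output.

def layout_to_obj(floor_corners, height, path="room.obj"):
    n = len(floor_corners)
    # Vertices: the floor ring first, then the matching ceiling ring.
    verts = [(x, 0.0, z) for x, z in floor_corners]
    verts += [(x, height, z) for x, z in floor_corners]

    faces = []
    # Walls: one quad (two triangles) per pair of adjacent corners.
    for i in range(n):
        j = (i + 1) % n
        a, b = i, j            # floor edge
        c, d = i + n, j + n    # ceiling edge (same corners, raised)
        faces.append((a, b, c))
        faces.append((b, d, c))
    # Floor and ceiling as triangle fans.
    for i in range(1, n - 1):
        faces.append((0, i + 1, i))          # floor, normal facing up
        faces.append((n, n + i, n + i + 1))  # ceiling, normal facing down

    with open(path, "w") as f:
        for v in verts:
            f.write("v {:.4f} {:.4f} {:.4f}\n".format(*v))
        for a, b, c in faces:  # OBJ face indices are 1-based
            f.write("f {} {} {}\n".format(a + 1, b + 1, c + 1))

# Example: a 4 m x 3 m rectangular room with a 2.5 m ceiling.
layout_to_obj([(0, 0), (4, 0), (4, 3), (0, 3)], 2.5)
```

Note that the fan triangulation of the floor and ceiling is only valid for convex rooms; concave layouts would need a general polygon triangulation such as ear clipping, and the winding order may need to be flipped for Unity's left-handed coordinate system. The paper's own pipeline instead generates the mesh dynamically inside Unity, with texturing from the panorama image.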
