Abstract

Large-scale unstructured point cloud scenes can be visualized quickly and without prior reconstruction by using level-of-detail structures to load an appropriate subset from out-of-core storage for rendering the current view. However, as soon as we need structure within the point cloud, e.g., for interactions between objects, constructing state-of-the-art data structures requires O(N log N) time for N points, which is not feasible in real time for millions of points that may be updated in every frame. We therefore propose a surface representation structure that trades the (here negligible) disadvantage of single-frame use for output-dominated, near-linear construction time in practice, exploiting the inherently 2D nature of surfaces sampled in 3D. This structure tightly encompasses the assumed surface of the unstructured points with a set of bounding depth intervals for each cell of a discrete 2D grid. The sorted depth samples in the structure permit fast surface queries, and on top of that an occlusion graph for the scene comes almost for free. This graph enables novel real-time user operations such as revealing partially occluded objects or scrolling through layers of occluding objects, e.g., the walls of a building. As an example application we showcase a 3D scene exploration framework that enables fast, more sophisticated interactions with point clouds rendered in real time.
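To make the described data layout concrete, here is a minimal C++ sketch of such a structure; all names (DepthInterval, DDSGrid, firstLayerBehind) are our own illustrative choices rather than the paper's, and the sketch assumes points already transformed into view-aligned coordinates:

    // Illustrative DDS-like layout (names are ours, not the paper's):
    // a view-aligned 2D grid; each cell keeps its bounding depth
    // intervals sorted front-to-back along the viewing direction.
    #include <cstddef>
    #include <vector>

    struct DepthInterval {
        float zNear;  // closest depth of one surface layer
        float zFar;   // farthest depth of the same layer
    };

    struct DDSGrid {
        int width, height;  // grid resolution
        // cells[y * width + x]: sorted intervals of that cell
        std::vector<std::vector<DepthInterval>> cells;

        DDSGrid(int w, int h)
            : width(w), height(h),
              cells(static_cast<std::size_t>(w) * h) {}

        // First surface layer at or behind depth z in cell (x, y);
        // the sorted order makes this a simple front-to-back scan.
        const DepthInterval* firstLayerBehind(int x, int y, float z) const {
            for (const DepthInterval& iv :
                 cells[static_cast<std::size_t>(y) * width + x])
                if (iv.zFar >= z) return &iv;
            return nullptr;  // no surface behind z in this cell
        }
    };

Keeping each cell's intervals sorted front-to-back is what makes both the surface queries and the occlusion extraction described below cheap.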

Highlights

  • A variety of current sensors allows acquiring large scenes as dense point clouds, and state-of-the-art methods can render such huge 3D data in real time

  • Virtual x-rays turn objects transparent or semi-transparent in order to reveal occluded items [11,25]. All these techniques assume that the objects have a priori well-defined surfaces, and that they are arranged in a tree-like structure, in order to find occlusions and depth ordering efficiently via ray casting. Here, this information is instead extracted from the discrete depth structure (DDS), which is built in real time and approximates the surfaces of unstructured point clouds

  • The DDS is based on definitions similar to those of the thickened layered depth images (TLDI) [29] and stores the same information, but it is more generically applicable, and we present a much more efficient construction method that combines TLDI's multiple passes over the input data into a single one (a hedged construction sketch follows this list)
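The following C++ sketch illustrates one plausible single-sweep construction under assumed details (the function buildIntervals, the cell size, and the layer-splitting threshold delta are hypothetical); the paper's actual algorithm may differ:

    // Hedged construction sketch (assumed details, not the paper's
    // algorithm): bin points into grid cells in one pass, sort each
    // cell's depths, and cut a new interval wherever the gap between
    // consecutive samples exceeds a threshold 'delta', which separates
    // one surface layer from the next.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct DepthInterval { float zNear, zFar; };  // as in the sketch above
    struct Point { float x, y, z; };              // view-aligned coordinates

    std::vector<std::vector<DepthInterval>>
    buildIntervals(const std::vector<Point>& pts, int w, int h,
                   float cellSize, float delta) {
        std::vector<std::vector<float>> depths(
            static_cast<std::size_t>(w) * h);
        for (const Point& p : pts) {  // the single pass over the input
            int cx = std::max(0, std::min(w - 1, int(p.x / cellSize)));
            int cy = std::max(0, std::min(h - 1, int(p.y / cellSize)));
            depths[static_cast<std::size_t>(cy) * w + cx].push_back(p.z);
        }
        std::vector<std::vector<DepthInterval>> cells(depths.size());
        for (std::size_t i = 0; i < depths.size(); ++i) {
            auto& d = depths[i];
            if (d.empty()) continue;
            std::sort(d.begin(), d.end());  // per-cell sorts stay cheap,
                                            // so the total is near-linear
            DepthInterval cur{d.front(), d.front()};
            for (std::size_t k = 1; k < d.size(); ++k) {
                if (d[k] - cur.zFar > delta) {  // gap: next surface layer
                    cells[i].push_back(cur);
                    cur = DepthInterval{d[k], d[k]};
                } else {
                    cur.zFar = d[k];
                }
            }
            cells[i].push_back(cur);
        }
        return cells;
    }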


Summary

Introduction

A variety of current sensors allows acquiring large scenes as dense point clouds, and state-of-the-art methods can render such huge 3D data in real time. As occlusions are view-dependent, occlusion relations between the scene objects can be extracted from the sorted, view-aligned depth intervals of the DDS (a minimal sketch of this extraction follows below). This is similar to casting rays perpendicular to (and at the centers of the cells of) the 2D grid on which the DDS is built, where the discretization guarantees that all parts of the surface relevant to the current view are considered. We provide an exploration tool, based on occludee revealing, to aid users in quickly understanding a scene and the spatial relations between its objects. This tool is coupled with a rendering structure, Potree [34], to form a complete framework for both rendering and exploring huge point clouds.

Our contributions include:

  • The DDS: a tight surface-bounding structure for point clouds with constant-time queries, which generalizes the TLDI [29] and is constructed much faster, in the same pass for all depth layers
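As a rough illustration of how occlusion relations fall out of the sorted per-cell intervals, the following sketch assumes each interval already carries the id of the object it belongs to (LabeledInterval and occlusionEdges are hypothetical names, and how the paper actually defines graph edges may differ):

    // Hedged sketch of occlusion-graph extraction from per-cell,
    // front-to-back sorted intervals: within each cell, any interval
    // in front of an interval of a different object contributes an
    // occluder -> occludee edge.
    #include <cstddef>
    #include <set>
    #include <utility>
    #include <vector>

    struct LabeledInterval { float zNear, zFar; int objectId; };

    // Edge (a, b) means "object a occludes object b" in this view.
    std::set<std::pair<int, int>> occlusionEdges(
        const std::vector<std::vector<LabeledInterval>>& cells) {
        std::set<std::pair<int, int>> edges;
        for (const auto& column : cells)          // one column per cell
            for (std::size_t j = 0; j < column.size(); ++j)
                for (std::size_t k = j + 1; k < column.size(); ++k)
                    if (column[j].objectId != column[k].objectId)
                        edges.insert({column[j].objectId,
                                      column[k].objectId});
        return edges;
    }

Traversing such a graph is what would let an exploration tool reveal occludees or peel away occluders layer by layer, as in the scrolling operation described in the abstract.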

Related work
Discrete depth structure
Discrete surface bound
DDS: construction and queries
Occlusion-based scene exploration
Reconstruction
Per-object reconstruction
Occlusion detection with the DDS
Exploration tools
Evaluation
Conclusion and potential applications
