Abstract
When optical methods are applied to automated 3D indoor modelling, the 3D reconstruction of objects and surfaces is very sensitive to both the lighting conditions and the properties of the observed surfaces, which ultimately compromises the utility of the acquired 3D point clouds. This paper presents a robust scene reconstruction method predicated upon the observation that most objects contain only a small set of primitives. The approach combines sparse approximation techniques from the compressive sensing domain with surface rendering approaches from computer graphics. The amalgamation of these techniques allows a scene to be represented by a small set of geometric primitives and generates perceptually appealing results. The resulting scene surface models are defined as implicit functions and may be processed by conventional rendering algorithms, such as marching cubes, to deliver polygonal models of arbitrary resolution. It will also be shown that 3D point clouds with outliers, strong noise and varying sampling density can be reliably processed without manual intervention.
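The implicit representation mentioned above can be illustrated with a minimal sketch (not the paper's implementation; the names `plane_sdf`, `n` and `p0` are illustrative): a surface is the zero level set f(p) = 0 of a function evaluated over a regular grid, which a polygonizer such as marching cubes can then mesh at any resolution.

```python
import numpy as np

# A planar surface described as an implicit function: f is the signed
# distance to the plane with unit normal n passing through point p0.
def plane_sdf(points, n, p0):
    """Signed distance of each point to the plane (n, p0)."""
    n = n / np.linalg.norm(n)
    return (points - p0) @ n

# Evaluate f on a regular 5x5x5 grid over [-1, 1]^3; a polygonizer such
# as marching cubes would extract the zero level set f = 0 from this.
xs = np.linspace(-1.0, 1.0, 5)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
f = plane_sdf(grid, n=np.array([0.0, 0.0, 1.0]), p0=np.zeros(3))

# Points on the plane z = 0 evaluate to zero; the sign gives the side.
assert np.allclose(f[np.isclose(grid[:, 2], 0.0)], 0.0)
```

Because f is defined everywhere, not only at the measured samples, the extracted mesh resolution is decoupled from the sampling density of the input point cloud.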
Highlights
While many low-cost 3D sensors, such as stereo cameras and the Kinect, have recently become available, they all have difficulty providing reliable object and surface reconstruction under varying lighting conditions or during motion, as for example in mobile systems.
The scenes contain many holes, since the distribution of the detected elements is very sparse in comparison with the point sets employed in computer graphics methods.
This section reviews some of the common methods for surface reconstruction, including the established class of surface modelling algorithms that directly create triangle-based meshes between 3D points, known as Delaunay triangulation (Su and Drysdale, 1995).
Summary
While many low-cost 3D sensors, such as stereo cameras and the Kinect, have recently become available, they all have difficulty providing reliable object and surface reconstruction under varying lighting conditions or during motion, as for example in mobile systems. Existing approaches from the computer graphics domain, including Ohtake et al. (2003), Kazhdan et al. (2013) and Alexa et al. (2001), can only process data which has either low noise or contains a small number of outliers. Because they have been designed to deliver visually appealing renderings, the semantic structure of the scene is not considered. In this paper a connection between sparse primitive-based scene modelling and dense rendering is established by introducing a new approach that extracts planar primitives from the point clouds and describes the surfaces by implicit functions. This allows dense rendering and the filling in of holes caused by missing samples.
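The idea of explaining noisy, outlier-contaminated samples with a small set of planar primitives and then densely resampling the implicit surface can be sketched as follows. This is a toy, greedy matching-pursuit-style selection over a hand-made plane dictionary, not the paper's sparse approximation algorithm; all names (`inliers`, `candidates`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy, sparsely sampled points near the plane z = 0.5, plus outliers.
pts = rng.uniform(-1, 1, size=(200, 3))
pts[:, 2] = 0.5 + 0.01 * rng.standard_normal(200)
pts[:10, 2] = rng.uniform(-1, 1, 10)          # simulated outliers

# A small dictionary of candidate planes (normal n, offset d): f(p) = n.p - d.
candidates = [(np.array([0.0, 0.0, 1.0]), d) for d in np.linspace(-1, 1, 21)]

def inliers(points, n, d, tol=0.05):
    """Boolean mask of points within tol of the implicit plane n.p = d."""
    return np.abs(points @ n - d) < tol

# Greedily select the primitive that explains the most samples; outliers
# and noise are tolerated because only inlier counts matter.
best_n, best_d = max(candidates, key=lambda c: inliers(pts, *c).sum())

# Dense resampling of the selected implicit plane fills the holes left by
# missing samples: every (x, y) gets z = (d - n_x*x - n_y*y) / n_z.
u = np.linspace(-1, 1, 50)
xx, yy = np.meshgrid(u, u)
zz = (best_d - best_n[0] * xx - best_n[1] * yy) / best_n[2]
dense = np.stack([xx, yy, zz], axis=-1)
```

The point of the sketch is the division of labour: a sparse, robust primitive selection step absorbs the noise and outliers, after which the implicit representation can be evaluated at arbitrary density for rendering.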