Abstract

3D sensors such as lidars, stereo cameras, time-of-flight cameras, and the Microsoft Kinect are increasingly found in a wide range of applications, including gaming, personal robotics, and space exploration. In some cases, pattern recognition algorithms for processing depth images can be tested using actual sensors observing real-world objects. In many situations, however, it is common to test new algorithms using computer-generated synthetic images, as such simulations tend to be faster, more flexible, and less expensive than hardware tests. Computer generation of images is especially useful for Monte Carlo-type analyses or for situations where obtaining real sensor data for preliminary testing is difficult (e.g., space applications). We present GLIDAR, an OpenGL and GL Shading Language-based sensor simulator capable of imaging nearly any static three-dimensional model. GLIDAR allows basic object manipulations, or may be connected to a physics simulator for more advanced behaviors. It can publish to a TCP socket at high frame rates or save to PCD (Point Cloud Data) files. The software is written in C++ and is released under the open-source BSD license.

Highlights

  • Sensors that produce three-dimensional (3D) point clouds of an observed scene are widely available and routinely used in a variety of applications, such as self-driving cars [1], precision agriculture [2], personal robotics [3], gaming [4], and space exploration [5,6].

  • Various research groups are left to independently redevelop such simulation capabilities, leading to much duplication of effort and a lack of a common tool. We address this problem by presenting GLIDAR.

  • GLIDAR has been tested on several Ubuntu Linux machines as well as on Mac OS X.


Introduction

Sensors that produce three-dimensional (3D) point clouds of an observed scene are widely available and routinely used in a variety of applications, such as self-driving cars [1], precision agriculture [2], personal robotics [3], gaming [4], and space exploration [5,6]. Various research groups are left to independently redevelop simulation capabilities for such sensors, leading to much duplication of effort and a lack of a common tool. We address this problem by presenting GLIDAR (a portmanteau of OpenGL and LIDAR), which is capable of loading 3D models in a variety of formats, re-orienting them, and saving depth images of the visible surfaces (Figure 1). GLIDAR does not simulate any specific piece of hardware. Shown at the bottom left of Figure 1 is a normal 3D rendering of the original model, the Stanford Bunny, which is available from the Stanford 3D Scanning Repository [21].
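
A depth image such as GLIDAR produces is typically back-projected into a 3D point cloud before point-cloud algorithms are run on it. The following is a generic sketch of that unprojection under a pinhole camera model, not code taken from GLIDAR; the function name and the intrinsics `fx`, `fy`, `cx`, `cy` are illustrative assumptions:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Back-project a depth image into a 3D point cloud using a pinhole
// camera model. fx, fy are focal lengths in pixels; (cx, cy) is the
// principal point. depth[v*width + u] holds the metric distance along
// the optical (z) axis; a value of zero marks "no return."
std::vector<Vec3> depth_to_cloud(const std::vector<float>& depth,
                                 int width, int height,
                                 float fx, float fy, float cx, float cy) {
    std::vector<Vec3> cloud;
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            float z = depth[v * width + u];
            if (z <= 0.0f) continue;  // skip pixels with no return
            cloud.push_back({ (u - cx) * z / fx,
                              (v - cy) * z / fy,
                              z });
        }
    }
    return cloud;
}
```

Because the simulator controls the camera model exactly, the same intrinsics used for rendering can be reused here, so the round trip from model surface to depth image to cloud introduces no calibration error.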

Algorithms
The Programmable Rendering Pipeline
Fragment Shader
Strategies for Improving Depth Accuracy
Discussion
