Abstract

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., influence of rain, dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.

Highlights

  • Depth sensors have become ubiquitous in many application areas, e.g., robotics, driver assistance systems, geo modeling, and 3D scanning using smartphones

  • To avoid the time-consuming manual labeling process of 3D point clouds and to provide a tool for rapid generation of ML training data across many domains, we have developed the BLAINDER add-on, a programmatic Artificial Intelligence (AI) extension of the open-source software Blender

  • The core purpose of the BLAINDER add-on is the automatic semantic segmentation of objects and object components in point clouds generated by virtual depth sensors


Introduction

Depth sensors have become ubiquitous in many application areas, e.g., robotics, driver assistance systems, geo modeling, and 3D scanning using smartphones. The output of such depth sensors is often used to build a 3D point-cloud representation of the environment. Artificial Intelligence (AI) approaches, often based on ML techniques, can be used to understand the structure of the environment by providing a semantic segmentation of the 3D point cloud, i.e., the detection and classification of the various objects in the scene. To train such classifiers, large amounts of training data are required that provide labeled examples of correct classifications. To avoid the time-consuming manual labeling process of 3D point clouds and to provide a tool for rapid generation of ML training data across many domains, we have developed the BLAINDER add-on, a programmatic AI extension of the open-source software Blender.
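To make the underlying idea concrete, the following is a minimal, illustrative sketch (not the BLAINDER implementation) of how a virtual depth sensor can produce an automatically labeled range measurement: a ray is cast into the scene, the hit distance is computed analytically (here against a single sphere rather than arbitrary Blender meshes), Gaussian range noise is added, and the semantic label of the hit object is attached to the point. All function and parameter names below are hypothetical.

```python
import math
import random

def simulate_lidar_ray(origin, direction, sphere_center, sphere_radius,
                       noise_sigma=0.01, label="sphere"):
    """Cast one ray against a sphere; return (noisy distance, label) or None.

    Illustrative only: a real simulator casts rays against arbitrary scene
    geometry via the 3D engine's ray-casting API and reads the hit object's
    semantic label, yielding a labeled point cloud without manual annotation.
    The ray direction is assumed to be a unit vector.
    """
    # Vector from sphere center to ray origin
    oc = [o - c for o, c in zip(origin, sphere_center)]
    # Quadratic coefficients for |origin + t*direction - center|^2 = r^2
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c  # a = 1 since direction is unit length
    if disc < 0:
        return None  # ray misses the object
    t = (-b - math.sqrt(disc)) / 2.0  # nearest intersection distance
    if t < 0:
        return None  # intersection lies behind the sensor
    # Gaussian range noise mimics the sensor's random measurement error
    return t + random.gauss(0.0, noise_sigma), label
```

With the sensor at the origin looking along +z at a unit sphere centered at (0, 0, 5), the noise-free hit distance is 4.0, and the returned point carries its class label automatically.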

Related Work
Structure of This Article
Fundamentals of Depth-Sensing
Spreading of Waterborne Sound
Interaction of Light and Sound with Matter
Reflection
Refraction
Measurement Errors Induced by Reflection and Refraction
Random Measurement Error
Modules of the Implementation
Semi-Static Scenes
Animations and Physics Simulation
Predefined Sensors
Adding Noise
Signal Processing and Physical Effects
Sound Profile in Water
Modeling Surface Properties with Materials
Semantic Labeling
Data Export
Results
Semantically Labeled 2D Images
Animations
Validation of Measurements
Runtime Performance
Number of Measurement Points
Number of Objects
Weather Simulation
Comparison to Similar Applications
Conclusions and Outlook