Abstract

This paper presents a framework for incremental 3D cuboid modeling using the mapping results of an RGB-D camera-based simultaneous localization and mapping (SLAM) system. The framework accurately creates cuboid CAD models from a point cloud in an online manner. While performing RGB-D SLAM, planes are incrementally reconstructed from the point cloud in each frame to create a plane map. Cuboids are then detected in the plane map by analyzing the positional relationships between the planes, such as orthogonality, convexity, and proximity. Finally, the position, pose, and size of a cuboid are determined by computing the intersection of three perpendicular planes. To suppress false detections, the cuboid shapes are incrementally updated with sequential measurements to check the uncertainty of the cuboids. In addition, the drift error of the SLAM is compensated by registering the cuboids. As an application of the framework, an augmented reality-based interactive cuboid modeling system was developed. In evaluations in cluttered environments, the precision and recall of cuboid detection were measured and compared against a batch-based cuboid detection method, clarifying the advantages of the proposed method.
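The cuboid parameter estimation step described above — determining a cuboid corner from the intersection of three perpendicular planes — reduces to solving a small linear system of the plane equations. A minimal sketch in Python with NumPy (the function name and plane representation are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def plane_intersection(planes):
    """Corner of a cuboid as the intersection point of three planes.

    Each plane is (n, d) with unit normal n and offset d, satisfying n . x = d.
    The three normals must be linearly independent (e.g., mutually orthogonal),
    which is the case for three perpendicular cuboid faces.
    """
    normals = np.array([n for n, _ in planes])   # stack normals as a 3x3 matrix
    offsets = np.array([d for _, d in planes])   # right-hand side of n . x = d
    return np.linalg.solve(normals, offsets)     # unique intersection point

# Three axis-aligned planes x = 1, y = 2, z = 3 meet at the corner (1, 2, 3).
corner = plane_intersection([
    (np.array([1.0, 0.0, 0.0]), 1.0),
    (np.array([0.0, 1.0, 0.0]), 2.0),
    (np.array([0.0, 0.0, 1.0]), 3.0),
])
```

Given this corner and the three face normals, the cuboid pose follows from the normals and the size from the extents of the plane inliers along each normal.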

Highlights

  • Owing to the advance of visual odometry and simultaneous localization and mapping (SLAM), the automated control of cars, drones, and robots has been achieved by generating a point cloud-based map. A cuboid, the target shape for the CAD conversion in this paper, is an informative shape representation because many types of cuboids exist in our environment; delivery boxes used in logistics and product packages in markets are practical examples. To achieve automated robot manipulation in such environments, techniques to recognize cuboids under cluttered conditions are often required (Sensors 2019, 19, 178; doi:10.3390/s19010178)

  • We propose a framework of incremental cuboid modeling by using the mapping results of an RGB-D SLAM system

  • To evaluate the performance of our proposed framework, we first prepared RGB-D image sequences capturing multiple boxes as our dataset, because only a single-view dataset was developed in the literature [5] and no dataset of RGB-D sequences with cuboid ground-truth annotations exists


Summary

Introduction

Owing to the advance of visual odometry and simultaneous localization and mapping (SLAM), the automated control of cars, drones, and robots has been achieved by generating a point cloud-based map. Approaches for generating cuboid-based building models have been proposed according to the density of a point cloud with Light Detection and Ranging (LIDAR) [15,16,17]. These methods are based on offline batch processing, such that recognition is performed with only a single observation. Since the parameters can be incrementally updated based on Reference [21], the positional relationships of the planes are analyzed at every frame. This means that both newly detected planes and previously detected cuboids are analyzed, so that false detections of cuboids can be suppressed with this sequential processing.
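The per-frame analysis of positional relationships between planes can be illustrated with a simple orthogonality test on plane normals, the kind of check that gates whether two planes are candidate faces of the same cuboid. A minimal sketch (the function name and angular tolerance are illustrative assumptions, not values from the paper):

```python
import numpy as np

def are_orthogonal(n1, n2, angle_tol_deg=5.0):
    """Check whether two unit plane normals are nearly perpendicular.

    Two planes are a cuboid-face candidate pair when the angle between
    their normals is close to 90 degrees. Since cos(90 deg - t) = sin(t),
    |cos(angle)| < sin(tolerance) accepts angles within the tolerance of 90.
    """
    cos_angle = abs(np.dot(n1, n2))          # |cos| near 0 means ~90 degrees
    return cos_angle < np.sin(np.radians(angle_tol_deg))

perpendicular = are_orthogonal(np.array([1.0, 0.0, 0.0]),
                               np.array([0.0, 1.0, 0.0]))   # True
parallel = are_orthogonal(np.array([1.0, 0.0, 0.0]),
                          np.array([1.0, 0.0, 0.0]))        # False
```

Analogous threshold tests on plane-to-plane distance (proximity) and on the sign of the normals relative to the shared edge (convexity) complete the relationship analysis sketched here.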

Related Work
Overview
Cuboid Detection
Second Plane Selection
Third Plane Selection
Cuboid Parameter Estimation
Cuboid Mapping
Cuboid Update
Cuboid Check with Cuboid Map
Drift Compensation
Cuboid Matching
Camera Pose Refinement
Local Plane Map Refinement
Interactive Cuboid Modeling
Evaluation
Cuboid-Shape Estimation
Cuboid Detection in a Cluttered Environment
Effectiveness of Drift Compensation
Computational Cost
Limitation
Conclusions