Abstract

Vision-based three-dimensional (3D) shape measurement techniques have been widely applied over the past decades in numerous applications owing to their high precision, high efficiency, and non-contact nature. Recently, great advances in computing devices and artificial intelligence have facilitated the development of vision-based measurement technology. This paper focuses on state-of-the-art vision-based methods that can perform 3D shape measurement with high precision and high resolution. Specifically, the basic principles and typical techniques of triangulation-based measurement methods, together with their advantages and limitations, are elaborated, and the learning-based techniques used for 3D vision measurement are enumerated. Finally, the recent advances in, and the prospects for, further improvement of vision-based 3D shape measurement techniques are discussed.

Highlights

  • The technical exploration of extracting three-dimensional (3D) information from two-dimensional (2D) images began with the research on the image processing of the polyhedral block world by L

  • Monocular vision-based measurements can be classified into two major categories: the conventional methods, including shape from focus (SFF) [10], structure from motion (SFM) [11], simultaneous localization and mapping (SLAM) [12], etc.; and the learning-based methods [13], which use a large number of samples to train a convolutional neural network (CNN) and obtain the depth information of the scene through the network model (a minimal sketch follows this list).

  • These passive methods are often limited by the texture of scenes and have lower accuracy compared with the active methods, represented by time-of-flight (ToF) [14], triangulation-based laser scanning [15], structured light (SL) [16], phase measuring deflectometry (PMD) [17], differential interference contrast [18], etc.
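Below is a minimal, illustrative sketch of the learning-based idea mentioned in the second highlight: a small convolutional encoder-decoder that maps a single RGB image to a dense depth map and would be trained on a large set of image/depth pairs. The class name, layer sizes, and input resolution are hypothetical and are not taken from any method surveyed in the paper.

```python
# Toy sketch of CNN-based monocular depth estimation (illustrative only).
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Minimal encoder-decoder that regresses per-pixel depth from one image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Softplus(),  # keep predicted depth positive
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthNet()
image = torch.rand(1, 3, 128, 128)   # dummy RGB image
depth = model(image)                 # (1, 1, 128, 128) dense depth map
```

In practice, published networks are far deeper and are trained with carefully designed loss functions over large datasets, but the single-image-in, depth-map-out structure is the same.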

Summary

Introduction

The technical exploration of extracting three-dimensional (3D) information from two-dimensional (2D) images began with the research on the image processing of the polyhedral block world by L. Monocular vision-based measurements can be classified into two major categories: the conventional methods, including shape from focus (SFF) [10], structure from motion (SFM) [11], simultaneous localization and mapping (SLAM) [12], etc.; and the learning-based methods [13], which use a large number of samples to train a convolutional neural network (CNN) and obtain the depth information of the scene through the network model. These passive methods are often limited by the texture of scenes and have lower accuracy compared with the active methods, represented by time-of-flight (ToF) [14], triangulation-based laser scanning [15], structured light (SL) [16], phase measuring deflectometry (PMD) [17], differential interference contrast [18], etc. Appropriate camera calibration methods should be used according to the specific application; a sketch combining calibration with simple laser triangulation is given below.
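As a concrete illustration of the calibration and triangulation points above, the sketch below first calibrates a camera from chessboard images with OpenCV (Zhang-style calibration) and then uses the recovered focal length, together with a known camera-laser baseline, to convert the image coordinate of a laser spot into depth. The geometry assumed here (laser beam parallel to the camera's optical axis at baseline b, so that z = f·b/(u − c_x)) is the simplest possible laser-triangulation layout and is chosen only for illustration; the image path, chessboard size, square size, baseline, and spot coordinate are placeholders, not values from the paper.

```python
# Illustrative sketch only: chessboard camera calibration with OpenCV, then
# depth recovery for one laser spot under the simplest triangulation geometry
# (laser beam parallel to the optical axis, offset by a known baseline).
import glob
import cv2
import numpy as np

# --- 1. Camera calibration from chessboard images (placeholder file path) ---
pattern = (9, 6)                  # inner corners of the chessboard (assumed)
square = 0.025                    # square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, image_size = [], [], None
for fname in glob.glob("calib_images/*.png"):      # placeholder path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]              # (width, height)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
fx, cx = K[0, 0], K[0, 2]         # focal length and principal point (pixels)

# --- 2. Laser triangulation using the calibrated intrinsics ---
baseline = 0.10                   # camera-laser distance in metres (assumed)
u_spot = 741.3                    # detected laser-spot column in pixels (example)
# With the beam parallel to the optical axis: u = fx * b / z + cx, so
z = fx * baseline / (u_spot - cx)
print(f"Estimated depth of the laser spot: {z:.3f} m")
```

Real laser-scanning and structured-light systems use more general geometries (arbitrary projector/laser poses, lens distortion correction, sub-pixel spot or fringe localization), but the calibrated intrinsics always enter the triangulation in this way.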

Epipolar Geometry
Laser Triangulation
Structured Light System Model
Three-Dimensional Laser Scanning Technique
Structured Light Technique
Fringe Projection
Comparison and Analysis
Uncertainty of Vision-Based Measurement
Findings
Challenges and Prospects
Conclusions