Monocular Visual Measurement System Uncertainty Analysis and One-Step End-to-End Estimation Upgrade
Monocular visual measurement and vision-guided robotics are widely used in modern automated manufacturing, particularly in aerospace assembly. However, during assembly pose measurement and guidance, the propagation and accumulation of multi-source errors, including those from visual measurement, hand-eye calibration, and robot calibration, impact final assembly accuracy. To address this issue, this study first proposes an uncertainty analysis method for monocular visual measurement systems in assembly pose estimation, encompassing the determination of uncertainty propagation paths and input uncertainty values. On this foundation, the system's uncertainty is analyzed. Informed by the uncertainty analysis results, the study further proposes a nonlinear mapping estimation method that solves a series of robot calibration and hand-eye calibration problems directly, in one step. Through experiments and discussion, a high-performance one-step end-to-end pose estimation convolutional neural network (OECNN) is constructed. The OECNN maps the pose variation of the target object directly to the drive-volume variation of the positioner. The uncertainty analysis yields a series of conclusions that are significant for further enhancing the precision of assembly pose estimation, and the proposed methodology may also serve as a reference for uncertainty analysis in other complex systems. Experimental validation demonstrates that the proposed one-step end-to-end pose estimation method achieves high accuracy. It can be applied to automated assembly tasks involving various vision-guided robots, including those with typical configurations, and is particularly suitable for high-precision assembly scenarios such as aircraft assembly.
- Conference Article
7
- 10.33012/2020.17588
- Oct 28, 2020
A system is presented for multi-antenna Carrier Phase Differential GNSS (CDGNSS)-based pose (position and orientation) estimation aided by monocular visual measurements and a smartphone-grade inertial sensor. The system is designed for micro aerial vehicles, but can be applied generally for low-cost, lightweight, high-accuracy, geo-referenced pose estimation. Visual and inertial measurements enable robust operation despite GNSS degradation by constraining uncertainty in the dynamics propagation, improving fixed-integer CDGNSS availability and reliability in areas with limited sky visibility. No prior work has demonstrated an increased CDGNSS integer fixing rate when incorporating visual measurements with smartphone-grade inertial sensing. A central pose estimation filter receives measurements from separate CDGNSS position and attitude estimators, visual feature measurements based on the ROVIO measurement model, and inertial measurements. The filter's pose estimates are fed back as a prior for CDGNSS integer fixing. A performance analysis under both simulated and real-world GNSS degradation shows that visual measurements greatly increase the availability and accuracy of low-cost inertial-aided CDGNSS pose estimation.
- Research Article
13
- 10.1371/journal.pone.0273261
- Oct 19, 2022
- PLoS ONE
Hand-eye calibration is an important step in controlling a vision-guided robot in applications such as part assembly, bin picking, and inspection. Many methods for estimating the hand-eye transformation have been proposed in the literature, with varying degrees of complexity and accuracy. However, the success of a vision-guided application is highly dependent on the accuracy of the hand-eye calibration between the vision system and the robot. This accuracy depends on several factors, such as rotation and translation noise and the rotation and translation motion ranges used during calibration. Previous studies and benchmarks of the proposed algorithms have largely focused on the combined effect of rotation and translation noise. This study provides insight into the impact of rotation and translation noise acting in isolation on hand-eye calibration accuracy, deviating from the most common assessment based on pose noise (combined rotation and translation noise). We also evaluate the impact of the robot motion range used during the hand-eye calibration operation, which is rarely considered. We provide a quantitative evaluation using six commonly used algorithms from an implementation perspective, comparatively analysing their performance through simulation case studies and experimental validation on a Universal Robots UR5e physical robot. Our results show that the algorithms respond differently as the noise conditions vary rather than following a general trend. For example, the simultaneous methods are more resistant to rotation noise, whereas the separate methods are better at dealing with translation noise. Additionally, while increasing the robot rotation motion span during calibration enhances the accuracy of the separate methods, it has a negative effect on the simultaneous methods.
Conversely, increasing the translation motion range improves the accuracy of simultaneous methods but degrades the accuracy of the separate methods. These findings suggest that those conditions should be considered when benchmarking algorithms or performing a calibration process for enhanced accuracy.
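The AX = XB relation underlying this benchmarking, and the isolation of rotation versus translation noise that the study advocates, can be sketched with synthetic data. This is a minimal illustration with hypothetical noise magnitudes, not the paper's benchmark:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle via the Rodrigues formula."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def hom(R, t):
    """4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

rng = np.random.default_rng(0)
X = hom(rodrigues([1, 2, 3], 0.7), np.array([0.1, -0.05, 0.2]))  # true hand-eye transform

# one gripper motion B and the corresponding camera motion A = X B X^-1
B = hom(rodrigues(rng.normal(size=3), 0.5), rng.normal(scale=0.3, size=3))
A = X @ B @ np.linalg.inv(X)

# noise-free data satisfies the hand-eye equation A X = X B exactly
assert np.allclose(A @ X, X @ B)

# inject rotation-only noise into A, leaving its translation untouched
A_rot_noisy = A.copy()
A_rot_noisy[:3, :3] = rodrigues(rng.normal(size=3), 1e-3) @ A[:3, :3]

# inject translation-only noise, leaving the rotation untouched
A_trans_noisy = A.copy()
A_trans_noisy[:3, 3] += rng.normal(scale=1e-3, size=3)

res_rot = np.linalg.norm(A_rot_noisy @ X - X @ B)
res_trans = np.linalg.norm(A_trans_noisy @ X - X @ B)
print(res_rot, res_trans)  # small but nonzero residuals under each noise type
```

Feeding such separately perturbed motion pairs into different solvers is one way to reproduce the isolation experiment the study describes.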
- Conference Article
5
- 10.1109/icarcv.2010.5707268
- Dec 1, 2010
We present a novel method for calibration of a robotic manipulator. The robot kinematic chain and its tool are observed by a hand mounted camera through a mirror. We show the possibility of enabling hand-eye, hand-tool, and kinematic robot calibration without incorporating accurate external references, except the mirror. Using this particularly simple setup, hand-eye calibration becomes independent of the kinematic chain and parameter observability constraints in kinematic calibration become more relaxed, which makes pose planning for robot calibration more convenient.
- Research Article
28
- 10.1108/ir-02-2018-0034
- Oct 8, 2018
- Industrial Robot: An International Journal
Purpose This paper aims to propose a hand–eye calibration method for an arc welding robot and a laser vision sensor using semidefinite programming (SDP). Design/methodology/approach The conversion relationship between the pixel coordinate system and the laser-plane coordinate system is established on the basis of the mathematical model of three-dimensional measurement with a laser vision sensor. In addition, the conversion relationship between the arc welding robot coordinate system and the laser vision sensor measurement coordinate system is established on the basis of the hand–eye calibration model. Ordinary least squares (OLS) is used to calculate the rotation matrix, and SDP is used to identify the direction vectors of the rotation matrix to ensure their orthogonality. Findings The feasibility identification reduces the calibration error and ensures the orthogonality of the calibration results. More accurate calibration results can be obtained by combining OLS and SDP. Originality/value A set of advanced calibration methods is systematically established, including parameter calibration of the laser vision sensor and hand–eye calibration of robots and sensors. For the hand–eye calibration, the physical feasibility problem of the rotation matrix is formulated and solved with an SDP algorithm. High-precision calibration results provide a good foundation for future research on seam tracking.
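The estimate-then-enforce-orthogonality idea can be illustrated on synthetic data. An SDP solver is out of scope here, so the sketch below projects the OLS estimate to the nearest rotation via SVD as a simple stand-in for the paper's SDP step; all data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical ground-truth sensor-to-robot rotation for illustration
theta = 0.6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])

# noisy vector correspondences: y_i ~= R_true @ x_i
X = rng.normal(size=(100, 3))
Y = X @ R_true.T + rng.normal(scale=0.01, size=(100, 3))

# OLS ignores orthogonality: solve min ||X W - Y|| for W ~= R_true^T
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
R_ols = W.T  # generally NOT an orthogonal matrix

# project to the nearest rotation via SVD (stand-in for the SDP step)
U, _, Vt = np.linalg.svd(R_ols)
if np.linalg.det(U @ Vt) < 0:
    U[:, -1] *= -1  # keep det = +1 so the result is a proper rotation
R_fixed = U @ Vt

print(np.linalg.norm(R_fixed.T @ R_fixed - np.eye(3)))  # ~0: orthogonality restored
```

The SDP formulation in the paper additionally encodes feasibility constraints; the SVD projection above only shows why a plain OLS solution needs such a correction at all.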
- Research Article
13
- 10.3390/electronics11030354
- Jan 24, 2022
- Electronics
To improve industrial production efficiency, a hand–eye system based on 3D vision is proposed and applied to workpiece assembly tasks. First, a hand–eye calibration optimization algorithm based on data filtering is proposed. This method attains the accuracy required for hand–eye calibration by filtering out improper data. Furthermore, an improved U-net is adopted for image segmentation, and SAC-IA coarse registration followed by ICP fine registration is adopted for point cloud registration, making the 6D pose estimation of the object more accurate. With the data-filtering hand–eye calibration method, the average hand–eye calibration error is reduced by 0.42 mm, to 0.08 mm. Compared with other models, the improved U-net proposed in this paper achieves higher accuracy for depth-image segmentation, with Acc and Dice coefficients of 0.961 and 0.876, respectively. The average translation error, average rotation error, and average runtime of the proposed object recognition and pose estimation methods are 1.19 mm, 1.27°, and 7.5 s, respectively. The experimental results show that the proposed system can complete high-precision assembly tasks.
- Conference Article
6
- 10.1109/cyber46603.2019.9066550
- Jul 1, 2019
Grasping 3D objects in unstructured environments is a great challenge, closely related to object recognition, pose estimation, hand-eye calibration, and grasp strategy planning. This paper focuses on the 6-DoF pose estimation and hand-eye calibration problems. Based on the point cloud provided by an RGB-D sensor, the Viewpoint Feature Histogram (VFH) descriptor is used to localize the object by comparing the scene against a model library. Instead of using a pan-tilt platform to build the template library, an industrial robot with an in-hand camera is programmed to collect point clouds from different view angles. Distances between the scene point cloud and the model point clouds are evaluated to find a group of candidate poses. The poses are further refined by aligning the point cloud pairs with the Iterative Closest Point (ICP) algorithm. Although the standard VFH descriptor is scale-invariant, it is sensitive to viewpoint variation, which may lead to unreliable results. To improve robustness, the effects of the translational offset and the number of pose candidates are evaluated. The hand-eye calibration process is formulated as an AX=ZB problem and solved using quaternion rotation and the least-squares method. A series of experiments is performed with an RGB-D sensor and an industrial robot. The results verify that the method is effective for estimating object poses. Considering the accuracy of the sensor used, the proposed method shows acceptable robustness and accuracy.
- Research Article
6
- 10.1063/5.0147783
- Jun 1, 2023
- Review of Scientific Instruments
The vision system is a crucial technology for realizing the automation and intelligence of industrial robots, and the accuracy of hand-eye calibration is decisive in determining the relationship between the camera and the robot end. Parallel robots are widely used in automated assembly owing to their high positioning accuracy and large carrying capacity, but traditional hand-eye calibration methods may not be applicable because of their limited motion range and the resulting accuracy problems. To address this issue, we propose a pose nonlinear-mapping estimation method to solve the hand-eye calibration problem and, through experiments and discussion, construct a 1-D pose estimation convolutional neural network (PECNN) with excellent performance. The PECNN achieves an end-to-end mapping from the variation of the target object pose to the variation of the robot end pose. Our experiments show that the proposed hand-eye calibration method has high accuracy and can be applied to the automated assembly tasks of vision-guided parallel robots. Moreover, the method is also applicable to most parallel and serial robots.
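The input/output structure of such an end-to-end pose mapping can be sketched with a linear least-squares fit standing in for the PECNN. The true mapping, the 6-vector pose parameterization, and all data below are hypothetical; the paper's point is precisely that a CNN can learn the nonlinear version of this map:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical ground-truth map from target-pose variation (dx, dy, dz,
# droll, dpitch, dyaw) to robot-end pose variation; the paper learns a
# nonlinear version of this with a CNN, a linear fit is only a stand-in
M_true = rng.normal(size=(6, 6))

dP_target = rng.normal(size=(500, 6))  # observed target-pose variations
dP_end = dP_target @ M_true.T + rng.normal(scale=1e-3, size=(500, 6))

# fit the map from calibration samples by least squares
W, *_ = np.linalg.lstsq(dP_target, dP_end, rcond=None)
M_hat = W.T

# predict the robot-end correction for a new target-pose variation
d_new = rng.normal(size=6)
pred = M_hat @ d_new
print(np.linalg.norm(pred - M_true @ d_new))  # small residual
```

Replacing the least-squares fit with a trained network gives the end-to-end scheme the abstract describes, without ever solving the hand-eye equation explicitly.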
- Research Article
2214
- 10.1016/s0951-8320(03)00058-9
- May 14, 2003
- Reliability Engineering & System Safety
Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems
- Single Report
67
- 10.2172/806696
- Nov 1, 2002
The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol’ variance decomposition, and fast probability integration. Desirable features of Monte Carlo analysis in conjunction with Latin hypercube sampling are described in discussions of the following topics: (i) properties of random, stratified and Latin hypercube sampling, (ii) comparisons of random and Latin hypercube sampling, (iii) operations involving Latin hypercube sampling (i.e. correlation control, reweighting of samples to incorporate changed distributions, replicated sampling to test reproducibility of results), (iv) uncertainty analysis (i.e. cumulative distribution functions, complementary cumulative distribution functions, box plots), (v) sensitivity analysis (i.e. scatterplots, regression analysis, correlation analysis, rank transformations, searches for nonrandom patterns), and (vi) analyses involving stochastic (i.e. aleatory) and subjective (i.e. epistemic) uncertainty.
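The defining property of Latin hypercube sampling, exactly one sample per equal-probability stratum in every dimension, can be shown with a minimal sampler, followed by a toy Monte Carlo uncertainty propagation through an arbitrary illustrative model:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0,1)^d with exactly one sample per 1/n stratum per dimension."""
    # jittered stratum positions, independently permuted in each dimension
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

rng = np.random.default_rng(3)
n, d = 100, 2
u = latin_hypercube(n, d, rng)

# stratification check: every one of the n strata holds exactly one sample
for j in range(d):
    counts = np.bincount((u[:, j] * n).astype(int), minlength=n)
    assert counts.max() == counts.min() == 1

# propagate input uncertainty through a toy model y = x1^2 + 3 x2,
# with x1, x2 ~ U(0, 1); the sample mean estimates E[y] = 1/3 + 3/2
y = u[:, 0] ** 2 + 3 * u[:, 1]
print(y.mean())
```

Because each marginal is stratified, the estimate of E[y] converges faster than with plain random sampling of the same size, which is the practical appeal the report describes.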
- Conference Article
8
- 10.1109/etfa52439.2022.9921738
- Sep 6, 2022
This paper presents an accurate and precise hand-eye calibration technique based on minimization of the reprojection error. Unlike traditional hand-eye calibration, the proposed method does not require an explicit estimate of the camera pose for each input image, because it does not rely on the mathematical description and problem formulation commonly used in standard hand-eye calibration algorithms. The proposed method is formulated as a nonlinear optimization problem, so the estimation can be solved efficiently and robustly and can be easily extended to different camera-robot setups (e.g., eye-on-base or eye-in-hand). An extensive evaluation based on simulated and real experiments demonstrates good estimation accuracy in terms of reprojection error. The experimental results with real robots show that the proposed method is applicable to relevant industrial contexts and improves the quality and precision of the camera-robot transformation estimate with respect to state-of-the-art approaches.
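The reprojection error that such a method minimizes can be sketched with a pinhole model. The intrinsics, target geometry, and perturbation below are hypothetical; the paper optimizes this quantity over the hand-eye parameters rather than over a single extrinsic as shown here:

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # hypothetical intrinsics

def project(K, R, t, pts):
    """Project 3-D world points to pixel coordinates through a pinhole camera."""
    cam = pts @ R.T + t           # world -> camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]  # perspective division

rng = np.random.default_rng(4)
pts = rng.uniform(-0.1, 0.1, size=(20, 3)) + np.array([0, 0, 1.0])  # points ~1 m away

R_true, t_true = np.eye(3), np.array([0.01, -0.02, 0.05])
obs = project(K, R_true, t_true, pts)  # "observed" pixel detections

# reprojection error of a slightly wrong extrinsic estimate: the scalar
# a reprojection-based calibration drives toward zero
t_est = t_true + np.array([0.002, 0, 0])
err = np.linalg.norm(project(K, R_true, t_est, pts) - obs, axis=1).mean()
print(err)  # mean pixel error of the perturbed estimate
```

In the full method this residual is summed over all images and minimized jointly over the camera-robot transform, which is why no per-image camera pose is ever estimated explicitly.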
- Research Article
8
- 10.1109/access.2021.3136850
- Jan 1, 2022
- IEEE Access
Explosive ordnance disposal (EOD) robots work in a special environment that requires dual robotic arms to cooperate in removing a bomb. Machine vision is therefore very important for locating bombs, and the accuracy of hand-eye calibration is especially critical. A basic problem in the collaborative work of dual robotic arms is solving for the unknown homogeneous transformation matrices: the hand-eye transform of robotic arm 1, the base-base transform, and the camera-to-end-effector transform of robotic arm 2. In this article, the hand-eye calibration problem of the dual-robotic-arm system is expressed as two matrix equations. A new method for simultaneously solving the unknowns in the matrix equations is proposed. This method consists of a closed-form method based on the Kronecker product and an iterative method that transforms the nonlinear problem into a convex optimization problem. The closed-form method is used to quickly obtain the initial value for the iterative method, improving the efficiency and accuracy of the iteration. In addition, we propose a hand-eye calibration method based on the re-projection error of an RGB-D camera. To demonstrate the feasibility and superiority of the proposed iterative method, we conducted simulation and physical experiments and compared it with two other calibration methods. The comparison results verify the superiority of the proposed method in terms of accuracy.
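The Kronecker-product linearization used to obtain a closed-form initial value can be sketched for the rotational part of AX = XB. With column-major vec, vec(R_A X − X R_B) = (I ⊗ R_A − R_Bᵀ ⊗ I) vec(X), so stacking two motions and taking the null space recovers X. This is a noise-free single-equation sketch, not the paper's full dual-arm formulation:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle via the Rodrigues formula."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

rng = np.random.default_rng(5)
X = rodrigues([1, -1, 2], 0.9)  # unknown hand-eye rotation to recover

# two gripper motions R_B and the corresponding camera motions R_A = X R_B X^T
blocks = []
for _ in range(2):
    RB = rodrigues(rng.normal(size=3), rng.uniform(0.3, 1.0))
    RA = X @ RB @ X.T
    # vec(R_A X - X R_B) = (I (x) R_A - R_B^T (x) I) vec(X), column-major vec
    blocks.append(np.kron(np.eye(3), RA) - np.kron(RB.T, np.eye(3)))

M = np.vstack(blocks)
_, _, Vt = np.linalg.svd(M)
v = Vt[-1]                                  # null-space vector = vec(X) up to scale
X_est = v.reshape(3, 3, order="F")
X_est *= np.sign(np.linalg.det(X_est))      # fix the sign ambiguity
X_est /= np.linalg.det(X_est) ** (1.0 / 3)  # fix the scale so det = 1
print(np.linalg.norm(X_est - X))            # ~0
```

With noisy data this closed-form answer is only approximate, which is exactly why the paper feeds it as the initial value of an iterative refinement.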
- Research Article
2
- 10.1364/oe.455188
- Apr 14, 2022
- Optics Express
For robot-assisted assembly of complex optical systems, alignment is facilitated by accurate pose estimation of the system's components. However, wavefront-based pose estimation is typically ill-conditioned due to the inherent geometry of conventional, industrially manufactured optical components. We therefore propose a novel approach in this paper to increase wavefront-based pose estimation accuracy via the design of freeform optics. For this purpose, an optimization problem is derived that parameterizes the component's surfaces with a predetermined freeform surface model. To show the efficacy of our approach, we provide simulation results comparing the pose estimation accuracy for a variety of optical designs. As an application example of the resulting improved pose estimation, a hand-eye calibration of a wavefront sensor is performed. This calibration originates from the field of robotics and represents the identification of a sensor coordinate system with respect to a global reference frame. For quantitative evaluation, the calibration results are first presented with the aid of simulation data. Finally, the practical feasibility is demonstrated using a conventional industrial robot and additively manufactured freeform lenses.
- Research Article
32
- 10.1177/0278364918778353
- Jun 25, 2018
- The International Journal of Robotics Research
Pose estimation is central to several robotics applications such as registration, hand–eye calibration, and simultaneous localization and mapping (SLAM). Online pose estimation methods typically use Gaussian distributions to describe the uncertainty in the pose parameters. Such a description can be inadequate when using parameters such as unit quaternions that are not unimodally distributed. A Bingham distribution can effectively model the uncertainty in unit quaternions, as it has antipodal symmetry and is defined on a unit hypersphere. A combination of Gaussian and Bingham distributions is used to develop a truly linear filter that accurately estimates the distribution of the pose parameters. The linear filter, however, comes at the cost of state-dependent measurement uncertainty. Using results from stochastic theory, we show that the state-dependent measurement uncertainty can be evaluated exactly. To show the broad applicability of this approach, we derive linear measurement models for applications that use position, surface-normal, and pose measurements. Experiments show that this approach is robust to initial estimation errors as well as sensor noise. Compared with state-of-the-art methods, our approach takes fewer iterations to converge onto the correct pose estimate. The efficacy of the formulation is illustrated with a number of examples on standard datasets as well as real-world experiments.
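The antipodal symmetry that motivates the Bingham distribution is easy to verify numerically: a unit quaternion q and its negation −q encode the same rotation, so a unimodal Gaussian on the quaternion components cannot represent this two-lobed uncertainty. A check using the standard quaternion-to-matrix conversion:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

rng = np.random.default_rng(6)
q = rng.normal(size=4)
q /= np.linalg.norm(q)  # random unit quaternion

# every product in the matrix involves two quaternion components, so
# negating all of q leaves the rotation unchanged: R(q) == R(-q)
print(np.allclose(quat_to_rot(q), quat_to_rot(-q)))  # True
```

The Bingham density is invariant under q → −q by construction, which is why it matches this geometry where a Gaussian on the 4-vector does not.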
- Research Article
8
- 10.1108/ijicc-06-2017-0067
- Jun 11, 2018
- International Journal of Intelligent Computing and Cybernetics
Purpose The purpose of this paper is to develop a monocular visual measurement system for autonomous aerial refueling (AAR) of unmanned aerial vehicles, which can process images from an infrared camera to estimate the pose of the drogue on the tanker with high accuracy and real-time performance. Design/methodology/approach Methods and techniques for marker detection, feature matching, and pose estimation have been designed and implemented in the visual measurement system. Findings The simple blob detection (SBD) method is adopted, which outperforms the Laplacian of Gaussian method, and a novel noise-elimination algorithm is proposed for excluding noise points. In addition, a novel feature matching algorithm based on perspective transformation is proposed. Comparative experimental results indicate the speed and effectiveness of the proposed methods. Practical implications The visual measurement system developed in this paper can estimate the pose of the drogue quickly and with high accuracy, providing a feasible measurement strategy that will considerably increase the autonomy and reliability of AAR. Originality/value The SBD method is used to detect the features, and a novel noise-elimination algorithm is proposed. In addition, a novel feature matching algorithm based on perspective transformation is proposed, which is robust and accurate.
- Book Chapter
15
- 10.1007/11492429_78
- Jan 1, 2005
Online implementation of robotic hand-eye calibration consists of determining the relative pose between the robot gripper/end-effector and the sensors mounted on it as the robot makes unplanned movements. With noisy measurements, inevitable in real applications, the calibration is sensitive to small rotations. Moreover, degenerate cases such as pure translations provide no constraint for hand-eye calibration. This paper proposes a motion-selection algorithm for hand-eye calibration. Using this method, we can avoid not only the degenerate cases but also small rotations, decreasing the calibration error. The procedure thus lends itself to an online implementation of hand-eye calibration, where degenerate cases and small rotations frequently occur in the sampled motions. Simulation and real experiments validate our method.
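The degenerate case this motion selection avoids can be shown directly: a pure translation imposes no constraint on the hand-eye rotation, and a very small rotation imposes only a weak, ill-conditioned one. This sketch uses the standard Kronecker linearization of R_A X = X R_B with hypothetical motions, not the paper's selection criterion:

```python
import numpy as np

# pure translation: the rotation parts of the camera motion A and the
# gripper motion B are both the identity, so R_A X = X R_B reduces to
# X = X -- every rotation satisfies it and X is unobservable
RA = np.eye(3)
RB = np.eye(3)
M = np.kron(np.eye(3), RA) - np.kron(RB.T, np.eye(3))  # constraint on vec(X)
print(np.linalg.matrix_rank(M))  # 0: no constraint at all

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle via the Rodrigues formula."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# a tiny rotation gives a constraint matrix whose entries scale with the
# rotation angle, so noise easily overwhelms it
RB = rodrigues([0, 0, 1], 1e-6)
X = rodrigues([1, 1, 0], 0.8)
M_small = np.kron(np.eye(3), X @ RB @ X.T) - np.kron(RB.T, np.eye(3))
s = np.linalg.svd(M_small, compute_uv=False)
print(s.max())  # ~1e-6: the constraint nearly vanishes as the rotation shrinks
```

Selecting motion pairs whose singular values stay large is one concrete way to realize the "avoid small rotations" rule the paper advocates.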