Abstract

During the past few decades, multiple camera systems have developed explosively. For example, a multi-camera system can be used by an advanced driver-assistance system. For cooperative tasks among robots, a multi-camera rig can be used to increase localization accuracy and robustness. In the logistics industry, a cargo drone equipped with a multi-camera system obtains a panoramic view. In these and other demanding tasks that rely heavily on multi-camera systems, accurate extrinsic calibration of the cameras is an absolute prerequisite for precise visual localization. In this dissertation, a weighted optimization method and a data selection strategy for extrinsic calibration are proposed that alleviate the inherent imbalance between the pose estimates in Liu's setup. In addition, two new extrinsic calibration methods are proposed to improve the extrinsic calibration accuracy further. Further contributions of the thesis are the two cooperative localization methods MOMA and S-MOMA, which can be applied to a group of robots. These methods address the localization challenges of indoor environments, where features are typically repetitive or scarce.

The weighted optimization method introduces a quality measure for each camera-to-marker pose estimate based on the projected size of the known planar calibration pattern in the image (a schematic sketch of this weighting follows the abstract). The data selection strategy guides the choice of measurements toward better coverage of the pose space used in the calibration procedure. The first proposed calibration method introduces a highly accurate tracking system to decouple the calibration objects that are rigidly linked in Liu's setup; with the aid of the tracking system, the method improves calibration accuracy further. The second calibration method uses active calibration patterns realized with two electronic displays. By controlling the fiducial patterns displayed on the monitors, the approach can actively acquire the best possible measurements for the calibration estimate. The configuration of the dynamic virtual pattern is chosen to maximize the sensitivity of the objective function, which is based on the sum of reprojection errors, with respect to the relative pose between the camera and the fiducial pattern. State-of-the-art calibration methods in different configurations are evaluated and compared in simulation as well as in real experiments, validating that both the optimization method and the two new calibration methods improve the calibration results in terms of accuracy and robustness.

In the second part of the dissertation, two novel, purely vision-based cooperative localization approaches for a multi-robot system, MOMA and S-MOMA, are introduced. MOMA realizes visual odometry via accurate MObile MArker-based positioning. The robots' movement pattern mimics that of a caterpillar. A fiducial marker board mounted on one of the robots serves as a mobile landmark, from which the relative pose between the robots is recovered. The absolute pose of each robot is obtained by concatenating the relative poses of the previous phases. The second localization algorithm, S-MOMA (MOMA with a stereo camera), extends the original MOMA approach.
By fusing absolute pose estimates from static environment features with relative pose estimates from the known mobile fiducial features, S-MOMA is formulated as an optimization problem that combines two objectives, one for each feature source, based on the same error measure, namely the reprojection error. A comparison of the proposed cooperative localization approaches MOMA and S-MOMA with state-of-the-art localization algorithms in different configurations validates the improvement in accuracy and robustness across various challenging test environments.
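
The following minimal sketch, which is not code from the dissertation, illustrates the weighting idea mentioned in the abstract under simple assumptions: the quality weight of each view is taken as the projected pixel area of the planar pattern's outline, and the calibration objective is the area-weighted sum of squared reprojection errors. The names projected_area, project, and weighted_calibration_cost are hypothetical, and a basic pinhole model with NumPy is assumed.

    # Hypothetical sketch (not code from the dissertation): each camera-to-marker
    # pose estimate is weighted by the projected size of the planar pattern in the
    # image, and the calibration objective is the weighted sum of reprojection errors.
    import numpy as np

    def projected_area(outline_px):
        # Shoelace area (in pixels^2) of the pattern's projected outline; a larger
        # projection is assumed to indicate a more reliable pose estimate.
        x, y = outline_px[:, 0], outline_px[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    def project(K, R, t, pts_3d):
        # Pinhole projection of the pattern's 3D points with intrinsics K and pose (R, t).
        cam = R @ pts_3d.T + t.reshape(3, 1)          # 3 x N points in the camera frame
        uv = (K @ cam)[:2] / cam[2]                   # perspective division
        return uv.T                                   # N x 2 pixel coordinates

    def weighted_calibration_cost(views, K, pattern_3d):
        # views: list of dicts holding the detected corners, the pattern outline in
        # pixels, and a candidate camera-to-marker pose (R, t) for each image.
        cost = 0.0
        for v in views:
            w = projected_area(v["outline_px"])       # per-view quality weight
            residual = project(K, v["R"], v["t"], pattern_3d) - v["corners_px"]
            cost += w * np.sum(residual ** 2)
        return cost

In this reading, views in which the pattern occupies only a small part of the image contribute less to the estimate, which is one plausible way to realize the quality measure described above.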
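
Similarly, the next sketch, again not taken from the dissertation, illustrates the MOMA pose concatenation and the S-MOMA objective described above: absolute poses are obtained by chaining 4x4 homogeneous relative transforms, and the joint cost simply adds the reprojection errors of static scene features and of the mobile marker board. Names such as chain_poses and smoma_cost are hypothetical.

    # Hypothetical sketch (not code from the dissertation): MOMA-style chaining of
    # relative poses into absolute poses, and an S-MOMA-style cost that combines the
    # reprojection errors of static scene features and of the mobile marker board.
    import numpy as np

    def chain_poses(T_world_start, relative_poses):
        # Absolute pose after each caterpillar phase, obtained by concatenating the
        # 4x4 homogeneous relative transforms estimated from the mobile marker.
        T = T_world_start.copy()
        trajectory = [T.copy()]
        for T_rel in relative_poses:
            T = T @ T_rel
            trajectory.append(T.copy())
        return trajectory

    def reprojection_error(K, T_cam, pts_h, pts_px):
        # Sum of squared pixel errors for homogeneous 3D points pts_h (N x 4) observed
        # at pixel locations pts_px (N x 2) by a camera with world-to-camera pose T_cam.
        cam = (T_cam @ pts_h.T)[:3]
        uv = (K @ cam)[:2] / cam[2]
        return np.sum((uv.T - pts_px) ** 2)

    def smoma_cost(K, T_cam, static_pts, static_px, marker_pts, marker_px):
        # One objective, two feature sources, the same error measure: reprojection
        # errors from static environment features plus those from the mobile marker.
        return (reprojection_error(K, T_cam, static_pts, static_px)
                + reprojection_error(K, T_cam, marker_pts, marker_px))

Expressing both objectives through the same reprojection error, as the abstract states, keeps the two feature sources commensurable within a single optimization.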
