Abstract
Aiming at the problem of robot positioning, a cooperative positioning method based on the fusion of depth laser and vision data is tested. It overcomes the vision sensor's sensitivity to the environment and its large measurement error, as well as the laser sensor's inability to recognize the target to be measured. Based on the principle of the RGB-D camera, a motion model of the robot is established. One local estimator, composed of the RGB-D camera observation model and the kinematics model, produces a position estimate and its variance; a second estimator, composed of the laser sensor's observation model and the motion model, produces another estimate and variance. The covariance intersection fusion method is then used to fuse the two sets of data, which improves the reliability and accuracy of the system. An experimental platform is built with a TurtleBot robot equipped with an odometer, an RGB-D camera, and a laser sensor. Experiments verify that the proposed method is feasible, that its error range is small, and that it meets the requirements of practical application.

Keywords: RGB-D; Robot; Scene modeling
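The fusion step described above — combining two position estimates with their variances when the cross-correlation between the estimators is unknown — can be sketched with the standard covariance intersection formula. This is a minimal illustration, not the paper's implementation; the function name and the grid search over the weight ω are assumptions made here for clarity.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_omega=101):
    """Fuse two estimates (x1, P1) and (x2, P2) with unknown
    cross-correlation via covariance intersection:
        P^-1 = w * P1^-1 + (1 - w) * P2^-1
        x    = P @ (w * P1^-1 @ x1 + (1 - w) * P2^-1 @ x2)
    The weight w is chosen by a simple grid search (an assumed
    strategy here) to minimize trace(P)."""
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_omega):
        P = np.linalg.inv(w * P1_inv + (1.0 - w) * P2_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * P1_inv @ x1 + (1.0 - w) * P2_inv @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Example: a camera-based estimate accurate in x, a laser-based
# estimate accurate in y; CI picks a weight between the two.
x_cam, P_cam = np.array([0.0, 0.0]), np.diag([1.0, 4.0])
x_las, P_las = np.array([0.2, 0.1]), np.diag([4.0, 1.0])
x_fused, P_fused = covariance_intersection(x_cam, P_cam, x_las, P_las)
```

Because the endpoints w = 0 and w = 1 are included in the search, the fused covariance trace is never worse than the better of the two inputs.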