Abstract

The accuracy of camera calibration is of great importance to vision measurements. Target-based calibration methods require the target to cover the whole field of view (FOV), which makes the procedure in large FOVs complex and dependent on personal experience. To overcome this difficulty, a calibration method based on reinforcement learning is proposed. First, a Markov decision process (MDP) model of the calibration procedure is established. Then, a reward function is designed that combines the calibration-accuracy requirement with a state-space constraint. Finally, optimized target locations and poses are obtained through continuous interaction with the calibration environment using Q-learning, which plays a key guiding role in camera calibration. Simulation and real experiments indicate that the proposed method effectively improves the success rate of large-FOV camera calibration and alleviates the reliance on personal experience, the low calibration accuracy, and the poor stability caused by target placement during calibration.
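To illustrate the kind of interaction loop the abstract describes, the following is a minimal tabular Q-learning sketch over a discretized set of candidate target poses. It is not the paper's implementation: the pose grid, the action set, the synthetic accuracy term, and the FOV penalty are all placeholder assumptions chosen only to show how a reward combining calibration accuracy with a state-space constraint could drive the selection of target placements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed discretization: an 8x8 grid of candidate target poses in the FOV and
# four actions that move the target to a neighbouring pose. These values are
# placeholders, not the paper's settings.
N_STATES, N_ACTIONS = 64, 4
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2      # learning rate, discount, epsilon-greedy rate
EPISODES, STEPS = 500, 30

def step(state, action):
    """Toy environment: deterministic move to a neighbouring pose on the grid."""
    row, col = divmod(state, 8)
    if action == 0:   col = max(col - 1, 0)   # shift target left
    elif action == 1: col = min(col + 1, 7)   # shift target right
    elif action == 2: row = max(row - 1, 0)   # bring target nearer
    else:             row = min(row + 1, 7)   # push target farther
    return row * 8 + col

def reward(next_state):
    """Placeholder reward: a synthetic accuracy term that favours poses away
    from the FOV centre, plus a penalty for poses at the FOV border (the
    state-space constraint). The paper's actual reward design is not
    reproduced here."""
    row, col = divmod(next_state, 8)
    accuracy_term = 0.1 * (abs(row - 3.5) + abs(col - 3.5))
    constraint_penalty = -1.0 if (row in (0, 7) or col in (0, 7)) else 0.0
    return accuracy_term + constraint_penalty

# Tabular Q-learning: interact with the (toy) calibration environment and
# update Q(s, a) toward r + gamma * max_a' Q(s', a').
Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(EPISODES):
    s = int(rng.integers(N_STATES))
    for _ in range(STEPS):
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        s_next = step(s, a)
        r = reward(s_next)
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# The greedy policy over the learned Q-table suggests the next target placement.
print("suggested target pose index:", int(np.argmax(Q.max(axis=1))))
```

In the paper's setting, the reward would instead be computed from the actual calibration outcome (e.g. reprojection error after adding the new target pose), so the learned policy guides where to place the target next rather than optimizing a synthetic surface as above.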
