Abstract

Machine vision, widely used in automation processes and autonomous systems, makes cameras essential components of many modern robotic applications. Cameras always bring the challenge of calibration, since, e.g., lens and sensor effects distort the image. The use of several cameras can be motivated by redundancy, a wider field of view, or the combination of different camera technologies to improve vision capabilities. Still, these cameras require an extrinsic calibration so that corresponding points in the images of different cameras can be linked. This paper describes a method to calibrate a multimodal camera setup with simple features that can be detected by all types of cameras in the setup. The proposed method yields an easily applicable routine that can be used not only in laboratory environments but also in the field, and it addresses both intrinsic and extrinsic calibration for a trinocular setup with a thermal camera, an event-based camera, and a combined color/depth camera, all facing roughly the same direction. Evaluation is performed with a small combined mount for the cameras that serves as ground truth for the extrinsic calibration, while the proposed features can be used to assess the quality of the intrinsic calibration with common methods.
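
The abstract mentions that the detected features feed into intrinsic calibration "with common methods" and into a pairwise extrinsic calibration. As a minimal sketch of such common methods (not the paper's actual pipeline or feature detector), the following assumes OpenCV and hypothetical inputs: per-view 2D detections of a known planar target and its 3D layout.

```python
# Illustrative sketch only; function and variable names are assumptions,
# not the authors' implementation.
import cv2

def intrinsic_calibration(object_points, image_points, image_size):
    """Intrinsic calibration from several views of a known planar target.

    object_points: list of (M, 3) float32 arrays, target coordinates (z = 0)
    image_points:  list of (M, 1, 2) float32 arrays, detected features per view
    image_size:    (width, height) of the camera image
    Returns the RMS reprojection error, camera matrix K, and distortion coefficients.
    """
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, K, dist

def extrinsic_calibration(object_points, pts_cam_a, pts_cam_b,
                          K_a, dist_a, K_b, dist_b, image_size):
    """Relative pose (R, T) of camera B with respect to camera A from features
    detected in both cameras, keeping the previously estimated intrinsics fixed."""
    rms, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        object_points, pts_cam_a, pts_cam_b,
        K_a, dist_a, K_b, dist_b, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return rms, R, T

# Usage idea: run intrinsic_calibration() once per modality (thermal,
# event-based, color/depth), then extrinsic_calibration() for each camera
# pair; the RMS reprojection error serves as a quality indicator.
```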
