Abstract
Perceiving its environment in 3D is an important ability for a modern robot. Today, this is often done using LiDARs, which, however, come with a strongly limited field of view (FOV). To extend their FOV, the sensors are mounted on driving vehicles in several different ways. This allows 3D perception even with 2D LiDARs if a corresponding localization system or technique is available. Another popular way to gain the most information from the scanners is to mount them on a rotating carrier platform. In this way, their measurements in different directions can be collected and transformed into a common frame in order to achieve a nearly full spherical perception. However, this is only possible if the kinematic chains of the platforms are known exactly, that is, if the LiDAR pose w.r.t. its rotation center is well known. The manual measurement of these chains is often very cumbersome or sometimes even impossible to do with the necessary precision. Our paper proposes a method to calibrate the extrinsic LiDAR parameters by decoupling the rotation from the full six-degrees-of-freedom transform and optimizing both separately. Thus, one error measure for the orientation and one for the translation with known orientation are minimized in succession using a combination of a grid search and a gradient descent. Both error measures are inferred from spherical calibration targets. Our experiments suggest that the main influences on the calibration result are the distance to the calibration targets, the accuracy of their center point estimation, and the resolution of the search grid. However, our proposed calibration method improves the extrinsic parameters even in unfavourable configurations and from inaccurate initial pose guesses.
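The decoupled optimization described above can be illustrated with a minimal sketch: a coarse grid search over the orientation, a gradient-descent refinement, and finally the translation estimated with the orientation held fixed. This is not the authors' implementation; it assumes the sphere-centre correspondences have already been extracted as paired 3D points, and all function names, grid steps, and learning rates are illustrative.

```python
import numpy as np

def euler_to_rot(rx, ry, rz):
    """Rotation matrix from XYZ Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def orientation_error(angles, src_c, dst_c):
    """Mean squared residual between rotated, centred source points and
    centred target points; centring removes the unknown translation, so
    this error depends on the orientation only (the decoupling step)."""
    R = euler_to_rot(*angles)
    return np.mean(np.sum((src_c @ R.T - dst_c) ** 2, axis=1))

def calibrate(src, dst, grid_step=np.deg2rad(20), gd_iters=200, lr=0.1):
    """Estimate R, t with src @ R.T + t ~= dst, rotation first."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)

    # 1) Coarse grid search over the Euler angles.
    grid = np.arange(-np.pi, np.pi, grid_step)
    best, best_err = None, np.inf
    for rx in grid:
        for ry in grid:
            for rz in grid:
                e = orientation_error((rx, ry, rz), src_c, dst_c)
                if e < best_err:
                    best, best_err = np.array([rx, ry, rz]), e

    # 2) Refine with gradient descent (central-difference gradient).
    angles, h = best.copy(), 1e-6
    for _ in range(gd_iters):
        grad = np.zeros(3)
        for i in range(3):
            d = np.zeros(3)
            d[i] = h
            grad[i] = (orientation_error(angles + d, src_c, dst_c)
                       - orientation_error(angles - d, src_c, dst_c)) / (2 * h)
        angles -= lr * grad

    # 3) Translation with known orientation: here a closed-form
    #    least-squares solution via the point-set centroids.
    R = euler_to_rot(*angles)
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Note that once the orientation is fixed, the translation of this simplified point-to-point error has a closed-form solution; the paper's actual error measures are built from the spherical calibration targets and may require an iterative minimization there as well.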
Highlights
In robotics, the perception of 3D information about the agent’s surroundings is important for many tasks, such as planning robot arm movements, object and obstacle detection, or 3D model reconstruction. The increasing number of available laser detection and ranging sensors (LiDAR), sometimes referred to as laser range finders (LRF), together with falling device prices, has led to their increasing use for 3D perception.
LiDARs are mounted on a rotating carrier to extend the sensors’ field of view.
The LiDAR is mounted in such a way that its frustum is either tangential to the trajectory of the scanner rotation, as indicated by the coloured triangles in Figure 2, or intersects this trajectory, for example, when the scanner in Figure 2 is rotated by 90°.
Summary
The perception of 3D information about the agent’s surroundings is important for many robotic tasks, such as planning robot arm movements, object and obstacle detection, or 3D model reconstruction. The increasing number of available laser detection and ranging sensors (LiDAR), sometimes referred to as laser range finders (LRF), together with falling device prices, has led to their increasing use for 3D perception. Although various sensor types and specifications [1,2,3] exist, all devices come with a strongly limited field of view and often with a low sampling resolution compared to other vision sensors. To compensate for these disadvantages, many developments employing different strategies of mounting the sensors on various kinds of vehicles and robots have been made. All of them try to gain as much information from the installed sensors as possible by extending their field of view. This is achieved either through a specific mounting pose [4,5] or by rotating the LiDARs on a sensor carrier [6,7].