Abstract

Obtaining accurate 3D descriptions in the thermal infrared (TIR) is a challenging task due to the low geometric resolution of TIR cameras and the small number of strong features in TIR images. Combining the radiometric information of the thermal infrared with 3D data from another sensor can overcome most of these limitations in geometric accuracy. For dynamic scenes with moving objects or a moving sensor system, a combination with RGB cameras and profile laser scanners is suitable. As a laser scanner is an active sensor in the visible red or near infrared (NIR) and the thermal infrared camera captures the radiation emitted by the objects in the observed scene, the combination of these two sensors for close-range applications is independent of external illumination and of textures in the scene. This contribution focuses on the fusion of point clouds from terrestrial laser scanners and RGB cameras with thermal infrared images, with all sensors mounted together on a robot for indoor 3D reconstruction. The system is geometrically calibrated, including the lever arms between the different sensors. As the fields of view of the sensors differ, the sensors do not record the same scene points at exactly the same time. Thus, the 3D scene points of the laser scanner and the photogrammetric point cloud from the RGB camera have to be synchronized before the point clouds are fused and the thermal channel is added to the 3D points.

Highlights

  • In building inspection, both geometric and radiometric properties are important

  • In photogrammetry and computer vision, a variety of methods are well developed for 3D reconstruction from ordered (Pollefeys et al., 2008) and unordered (Mayer et al., 2012; Snavely et al., 2008) image sequences

  • These methods are limited to structured surfaces with features that can be detected as homologous points through the sequences


Summary

INTRODUCTION

In building inspection, both geometric and radiometric properties are important. For geometric accuracy, both point clouds from terrestrial laser scanners (TLS) and photogrammetric stereo reconstruction from RGB images can be used. Coregistration of TOF cameras and RGB images is done by calculating the relative orientation in a bundle adjustment with homologous points (Hastedt and Luhmann, 2012), since the radiometric behaviour in the near infrared and in visible light is almost the same. To fuse the coordinate systems, either the separately generated dense 3D point clouds (Hirschmueller, 2008) of the different sensors are coregistered, or the different sensors have to be mounted on a common platform with known, fixed lever arms between the sensors. In this project, strategies for the fusion of 3D point clouds acquired by a laser scanner and a stereo camera with thermal imagery were explored. The point clouds are fused into one point cloud that is afterwards extended with thermal infrared intensities from the thermal infrared images.
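The final step described above, extending a fused point cloud with thermal infrared intensities, can be sketched as a projection of each 3D point into the TIR image using the calibrated extrinsics (rotation, translation, and lever arm) and intrinsics. The following is a minimal sketch only, not the authors' implementation: it assumes an idealized pinhole model without lens distortion, and the function name and parameters are hypothetical.

```python
import numpy as np

def colorize_with_tir(points_xyz, tir_image, R, t, K):
    """Attach TIR intensities to 3D points by projecting them into a
    thermal image (hypothetical sketch; pinhole model, no distortion).

    points_xyz : (N, 3) points in the platform/world frame
    tir_image  : (H, W) thermal image, one intensity per pixel
    R, t       : rotation (3x3) and translation (3,) from the geometric
                 calibration (incl. lever arm), mapping world -> TIR camera
    K          : (3, 3) intrinsic matrix of the TIR camera
    """
    cam = points_xyz @ R.T + t              # world frame -> camera frame
    in_front = cam[:, 2] > 0                # keep points in front of the camera
    uv = cam @ K.T                          # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division
    u = np.round(uv[:, 0]).astype(int)      # nearest-pixel sampling
    v = np.round(uv[:, 1]).astype(int)
    h, w = tir_image.shape
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    temps = np.full(len(points_xyz), np.nan) # NaN for points outside the image
    temps[valid] = tir_image[v[valid], u[valid]]
    # return points with the thermal channel appended, plus a visibility mask
    return np.column_stack([points_xyz, temps]), valid
```

Because the fields of view of the sensors differ, only the subset of points flagged `valid` receives a thermal value from a given image; the rest keep NaN until another TIR frame covers them. A full pipeline would additionally model lens distortion and occlusion, which this sketch omits.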

GEOMETRIC CALIBRATION OF THE SENSOR SYSTEM
EXPERIMENTS
CONCLUSION