Abstract
Thermal images produced by a long-wavelength infrared (LWIR) camera are robust and independent of environmental illumination changes, so they can complement a standard visible-light camera under complicated environmental conditions. Going beyond the traditional stereo multi-spectral sensor consisting of one visible-light camera and one LWIR camera, a novel architecture of a large field-of-view (FOV), cooperative infrared and visible spectral sensor for visual odometry is proposed. The sensor is equipped with two visible-light cameras and four infrared cameras, covering a 120-degree horizontal FOV in both bands. The distribution of the cameras and related peripheral devices is specifically designed so that the sensor's volume is less than 100 cm (length) × 10 cm (height) × 10 cm (width). The sensor's camera calibration, distortion correction, and measurement principle are elaborated. A feature-based method for the visible band and a multi-windowed, optimization-based image alignment method for the infrared band are designed for the visual odometry, reflecting the different imaging mechanisms and camera distributions in the sensor. A management scheme for frames and estimated poses from both bands is proposed. Moreover, all proposed methods can be implemented on the sensor's embedded processor, with an electrical power consumption of only 12 W. Evaluation experiments show that large-FOV cooperative multi-spectral cameras can efficiently improve the robustness of visual odometry, and the sensor runs in real time at more than 10 fps with disparity map construction in both bands.
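The abstract reports real-time disparity map construction in both bands. The paper's own pipeline is not reproduced here; the sketch below is only a minimal, generic block-matching disparity estimator using the sum of absolute differences (SAD), written in NumPy to illustrate the kind of per-pixel computation involved. The function name and parameters are illustrative assumptions, not the sensor's actual implementation.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """Naive block-matching disparity: for each pixel in the left image,
    find the horizontal shift of the right image that minimizes the sum
    of absolute differences (SAD) over a win x win window."""
    h, w = left.shape
    half = win // 2
    L = np.pad(left.astype(np.float64), half)
    R = np.pad(right.astype(np.float64), half)
    disp = np.zeros((h, w), dtype=np.int32)
    best = np.full((h, w), np.inf)
    for d in range(max_disp):
        # Shift the right image by d pixels and compare photometrically.
        diff = np.abs(L - np.roll(R, d, axis=1))
        # Aggregate the per-pixel differences over a win x win window.
        cost = np.zeros((h, w))
        for dy in range(win):
            for dx in range(win):
                cost += diff[dy:dy + h, dx:dx + w]
        better = cost < best
        disp[better] = d
        best[better] = cost[better]
    return disp
```

A real-time embedded implementation would replace the inner loops with integral images or SIMD/hardware acceleration; this version only shows the matching principle.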
Highlights
Localizing itself and estimating its ego-motion in 3D space are crucial tasks for autonomous vehicles, mobile robots, and Unmanned Aerial Vehicles (UAVs) [1]
In the experiments and results, we show key aspects of the system's performance not demonstrated in the implementation part: (1) the infrared visual odometry, i.e., results of our direct method on infrared images and the frame management of the proposed method; (2) a quantitative evaluation of the sensor's visual odometry
For low-texture long-wavelength infrared (LWIR) imagery, grid-based feature extraction cannot prevent the extracted keypoints from concentrating in a few particular areas
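Grid-based extraction caps the number of keypoints per cell so that features spread across the image; the highlight notes that on low-texture LWIR imagery even this cap cannot fully prevent clustering, because many cells contain no usable response at all. As an illustration only (not the paper's implementation), here is a minimal NumPy sketch of grid-based keypoint selection on top of a simple Harris corner response; all names and thresholds are assumptions.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response from central-difference gradients."""
    img = img.astype(np.float64)
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):
        # Sum each 3x3 neighborhood (zero-padded borders).
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

def grid_keypoints(img, grid=(4, 4), per_cell=5, min_resp=1e-6):
    """Keep at most `per_cell` strongest responses in each grid cell,
    spreading keypoints over the image; low-texture cells whose best
    response falls below `min_resp` contribute no keypoints, which is
    why clustering can persist on LWIR images."""
    R = harris_response(img)
    h, w = img.shape
    gy, gx = grid
    kps = []
    for i in range(gy):
        for j in range(gx):
            y0, y1 = i * h // gy, (i + 1) * h // gy
            x0, x1 = j * w // gx, (j + 1) * w // gx
            cell = R[y0:y1, x0:x1]
            for idx in np.argsort(cell, axis=None)[::-1][:per_cell]:
                y, x = np.unravel_index(idx, cell.shape)
                if cell[y, x] > min_resp:
                    kps.append((y0 + y, x0 + x))
    return kps
```

On a visible-band image most cells pass the response threshold, while on a flat thermal image many cells return nothing, so the surviving keypoints still cluster in the few textured regions.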
Summary
Localizing itself and estimating its ego-motion in 3D space are crucial tasks for autonomous vehicles, mobile robots, and Unmanned Aerial Vehicles (UAVs) [1]. These tasks are achieved using LiDARs, monocular imagery, stereo imagery, etc. LiDAR already plays an important role in this research area, but it has several drawbacks: it is expensive; its weight and power consumption are not affordable for some platforms; and, as an active sensor, it is affected by environmental signal noise and by emissions from other LiDARs [2]. As a passive sensor, the camera has its own advantages in information acquisition and can play an important part in 3D data capture.