Abstract
The recent progress in the development of measurement systems for autonomous recognition has had a substantial impact on emerging technologies in numerous fields, especially robotics and automotive applications. In particular, time-of-flight (TOF) based light detection and ranging (LiDAR) systems enable mapping of the surrounding environment over long distances and with high accuracy. Combining advanced LiDAR with an artificial intelligence platform allows enhanced object recognition and classification, which, however, still suffers from inaccuracy and misidentification. Recently, multi-spectral LiDAR systems have been employed to increase object recognition performance by additionally providing material information in the short-wave infrared (SWIR) range, where the reflection spectrum is typically very sensitive to material properties. However, previous multi-spectral LiDAR systems relied on band-pass filters or complex dispersive optical systems and often required multiple photodetectors, adding complexity and cost. In this work, we propose a time-division-multiplexing (TDM) based multi-spectral LiDAR system for semantic object inference through the simultaneous acquisition of spatial and spectral information. The TDM method enables the simultaneous acquisition of spatial and spectral information, together with a TOF-based distance map, with minimized optical loss using only a single photodetector. Our LiDAR system uses nanosecond pulses at five different wavelengths in the SWIR range to acquire sufficient material information in addition to 3D spatial information. To demonstrate the recognition performance, we map the multi-spectral images of a human hand, a mannequin hand, a fabric-gloved hand, a nitrile-gloved hand, and a printed human hand onto RGB-color-encoded images, which clearly visualize material-dependent spectral differences as RGB color even though the objects have a similar shape. In addition, the classification performance on the multi-spectral images is demonstrated with a convolutional neural network (CNN) model using the full multi-spectral data set. Our work presents a compact, novel spectroscopic LiDAR system that provides increased recognition performance and thus great potential to improve safety and reliability in autonomous driving.
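As a rough illustration of the acquisition principle summarized above (not code or parameters from the paper), the sketch below shows how a single-detector TDM frame might be demultiplexed into per-wavelength return traces, how a TOF distance follows from the pulse delay, and how three of the five SWIR reflectance channels could be mapped to an RGB false-color pixel. All names, slot timings, and channel assignments are hypothetical placeholders.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

# Hypothetical TDM parameters (illustrative only, not from the paper).
NUM_WAVELENGTHS = 5   # five SWIR wavelengths multiplexed in time
SLOT_NS = 200.0       # time slot per wavelength within one TDM frame, in ns

def demultiplex_frame(sample_times_ns, samples):
    """Split a single-detector TDM frame into per-wavelength return traces.

    sample_times_ns: 1-D array of sample timestamps within the frame (ns)
    samples:         1-D array of detector amplitudes at those timestamps
    Returns a list of (times, amplitudes) pairs, one per wavelength slot.
    """
    traces = []
    for k in range(NUM_WAVELENGTHS):
        start, stop = k * SLOT_NS, (k + 1) * SLOT_NS
        mask = (sample_times_ns >= start) & (sample_times_ns < stop)
        # Re-reference each slot to its own pulse-emission time.
        traces.append((sample_times_ns[mask] - start, samples[mask]))
    return traces

def tof_distance_m(delay_ns):
    """Round-trip time of flight to one-way distance: d = c * t / 2."""
    return C * (delay_ns * 1e-9) / 2.0

def false_color_pixel(reflectances, channel_indices=(0, 2, 4)):
    """Map three of the five spectral reflectances to an RGB triple in [0, 1]."""
    r = np.asarray(reflectances, dtype=float)
    rgb = r[list(channel_indices)]
    return rgb / (rgb.max() + 1e-12)

# Example: a return delayed by 66.7 ns corresponds to roughly 10 m.
print(f"{tof_distance_m(66.7):.2f} m")
```

In this sketch, the per-pixel RGB triple only visualizes material contrast; a full classifier, such as the CNN mentioned in the abstract, would instead consume all five spectral channels (optionally together with the distance map) per pixel.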