Abstract

LiDAR offers high accuracy and resolution and is widely used in many fields. In particular, phase-modulated continuous-wave (PhMCW) LiDAR has merits such as low power consumption, high precision, and no need for laser frequency modulation. However, as the signal-to-noise ratio (SNR) decreases, the noise on the signal waveform becomes so severe that existing methods for extracting the time-of-flight are no longer feasible. In this paper, a novel method that uses deep neural networks to measure the pulse width is proposed. The effects of distance resolution and SNR on performance are explored. Recognition accuracy reaches 81.4% at a distance resolution of 0.1 m with an SNR as low as 2. We simulate a scene containing a vehicle, a tree, a house, and a background located up to 6 m away. The reconstructed point cloud shows good fidelity: the object contours are clear and the features are well restored. More precisely, the distance errors for the three objects are 4.73 cm, 6.00 cm, and 7.19 cm, respectively, demonstrating the excellent performance of the proposed method. To the best of our knowledge, this is the first work that employs a neural network to directly process LiDAR signals and extract the time-of-flight.
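
The abstract only summarizes the approach, so the sketch below is a rough, hypothetical illustration of the core idea: letting a network recover the pulse width, and hence the distance, from a noisy waveform by classifying it into 0.1 m distance bins. The pulse model, sampling rate, network architecture, and the names `simulate_waveform` and `PulseWidthNet` are all illustrative assumptions, not the authors' implementation or parameters.

```python
# Minimal sketch (assumed, not the paper's code): a 1-D CNN classifies noisy
# pulse waveforms into 0.1 m distance bins covering 0-6 m, mimicking the idea
# of extracting time-of-flight via pulse width at low SNR.
import numpy as np
import torch
import torch.nn as nn

N_SAMPLES = 1024      # samples per waveform (assumed)
N_BINS = 60           # 0.1 m bins over 0-6 m (scene depth from the abstract)
C = 3e8               # speed of light, m/s
FS = 10e9             # assumed sampling rate, Hz


def simulate_waveform(distance_m: float, snr: float) -> np.ndarray:
    """Rectangular pulse whose width encodes the round-trip time-of-flight,
    buried in additive white Gaussian noise at the requested SNR."""
    tof = 2.0 * distance_m / C                 # round-trip time-of-flight
    width = int(round(tof * FS))               # pulse width in samples
    signal = np.zeros(N_SAMPLES, dtype=np.float32)
    signal[:max(width, 1)] = 1.0
    noise_std = 1.0 / np.sqrt(snr)             # unit-amplitude pulse: SNR = 1 / sigma^2
    return signal + np.random.normal(0.0, noise_std, N_SAMPLES).astype(np.float32)


class PulseWidthNet(nn.Module):
    """Small 1-D CNN mapping a noisy waveform to a distance-bin class."""

    def __init__(self, n_bins: int = N_BINS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_bins)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


if __name__ == "__main__":
    model = PulseWidthNet()
    # One synthetic example: a target at 4.7 m with SNR = 2 (untrained model,
    # so the prediction is arbitrary; training over many simulated waveforms
    # per bin would be required in practice).
    wave = torch.from_numpy(simulate_waveform(4.7, snr=2.0)).view(1, 1, -1)
    predicted_bin = model(wave).argmax(dim=1).item()
    print(f"predicted distance bin starts at {0.1 * predicted_bin:.1f} m")
```

Framing the problem as classification over distance bins is one way to realize the 0.1 m resolution quoted above; a regression head over pulse width would be an equally plausible alternative under the same assumptions.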
