Abstract

Light detection and ranging (LiDAR) sensors have become one of the key building blocks for realizing VR/AR metaverse applications on mobile devices and level-5 autonomous vehicles. In particular, SPAD-based direct time-of-flight (D-ToF) sensors have emerged as LiDAR sensors of choice because they offer a longer maximum detectable range and higher background-light immunity than indirect time-of-flight (I-ToF) sensors with photon-mixing devices [1]. However, resolving ToF values as short as 100 ps requires complicated front- and back-end blocks, including high-resolution time-to-digital converters (TDCs) and several memory blocks, which limit the spatial resolution and the depth accuracy at short ranges. To address this issue, alternative architectures combining the D-ToF and I-ToF techniques have been reported [2, 3]. Direct-indirect-mixed frame synthesis provides accurate depth information by detecting phases at short ranges while creating a sparse depth map by counting photons at long ranges [2]. A two-step histogramming TDC is used in [3], where a coarse D-ToF step roughly discriminates distance and a fine I-ToF step extracts depth precisely. However, these approaches still suffer from limited depth accuracy [2] or low spatial resolution [3].
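As a rough sanity check on the timing requirement (these are the standard ToF relations, not figures from the cited works; $\varphi$ and $f_{\mathrm{mod}}$ below are illustrative symbols): in D-ToF the depth follows directly from the round-trip time,

$$ d = \frac{c\,t_{\mathrm{ToF}}}{2}, \qquad \Delta d = \frac{c\,\Delta t}{2} = \frac{(3\times 10^{8}\,\mathrm{m/s})(100\,\mathrm{ps})}{2} = 15\,\mathrm{mm}, $$

so centimeter-level accuracy demands TDC resolutions of roughly 100 ps or better. An I-ToF sensor instead recovers depth from the phase $\varphi$ of a signal modulated at frequency $f_{\mathrm{mod}}$,

$$ d = \frac{c}{2}\cdot\frac{\varphi}{2\pi f_{\mathrm{mod}}}, $$

trading raw timing resolution for phase resolution at the cost of an unambiguous range of $c/(2 f_{\mathrm{mod}})$, which is why the hybrid schemes above use I-ToF for precision at short range and D-ToF for reach at long range.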
