Abstract
The number of applications that use depth imaging is increasing rapidly, e.g. autonomous vehicles and auto-focus assist on smartphone cameras. Light detection and ranging (lidar) via single-photon avalanche diode (SPAD) arrays is an emerging technology that enables the acquisition of depth images at high frame rates. However, the spatial resolution of this technology is typically low in comparison to the intensity images recorded by conventional cameras. To increase the native resolution of depth images from a SPAD camera, we develop a deep network built to take advantage of the multiple features that can be extracted from a camera's histogram data. The network is designed for a SPAD camera operating in a dual mode such that it captures alternate low-resolution depth and high-resolution intensity images at high frame rates; the system therefore does not require any additional sensor to provide intensity images. The network then uses the intensity images and multiple features extracted from down-sampled histograms to guide the up-sampling of the depth. Our network provides significant image resolution enhancement and image denoising across a wide range of signal-to-noise ratios and photon levels. Additionally, we show that the network can be applied to other types of SPAD data, demonstrating the generality of the algorithm.
Highlights
Light detection and ranging, where a pulse of light is used to illuminate a target and a detector provides time-of-flight information, is one of the leading technologies for depth imaging
This paper is organized as follows: in Section 2, we provide a brief overview of the single-photon avalanche diode (SPAD) array sensor, the model of photon detection, and we present the processing done to the SPAD data to extract useful information prior to the reconstruction via the network
We develop a network suitable for a SPAD array sensor, the Quantic 4x4 sensor, that generates histograms of counts on-chip and operates in a hybrid acquisition mode [8,9,10]
Summary
Light detection and ranging (lidar), where a pulse of light is used to illuminate a target and a detector provides time-of-flight information, is one of the leading technologies for depth imaging. In the context of lidar, several different SPAD array sensors have been developed; see [1, 2] for recent examples. They have been used to measure depth in a range of scenarios, including under water [3, 4], at long range [5,6,7], at high speed [8,9,10,11], and providing high-resolution depth information [2, 12]. The SPAD camera alternates between two modes at over 1000 frames per second: it provides high-resolution intensity images at a resolution of 256x128 pixels, followed by a low-resolution 64x32x16 cube of photon-count histograms containing depth information. This paper is organized as follows: in Section 2, we provide a brief overview of the SPAD array sensor, the model of photon detection, and we present the processing done to the SPAD data to extract useful information prior to the reconstruction via the network.
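To illustrate how a photon-count histogram cube encodes depth, the sketch below converts a 64x32x16 cube into a coarse depth map by taking each pixel's peak time-of-flight bin. This is a minimal illustration, not the paper's reconstruction network: the per-pixel argmax, the bin width, and the toy Poisson data are all assumptions chosen for the example.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def histogram_to_depth(hist, bin_width_s):
    """Convert a per-pixel photon-count histogram (H, W, T) into a
    coarse depth map from the peak time-of-flight bin.

    Assumption (illustrative only): the signal return dominates the
    background, so a per-pixel argmax approximates the return time.
    """
    peak_bin = np.argmax(hist, axis=-1)   # (H, W) indices of peak bin
    tof = (peak_bin + 0.5) * bin_width_s  # bin centre -> round-trip time (s)
    return C * tof / 2.0                  # round trip -> one-way distance (m)

# Toy data at the sensor's histogram resolution (64x32 pixels, 16 bins);
# the 1 ns bin width is illustrative, not the camera's real timing resolution.
rng = np.random.default_rng(0)
hist = rng.poisson(1.0, size=(32, 64, 16))  # ambient background counts
hist[10, 20, 5] += 50                       # a strong return in bin 5
depth = histogram_to_depth(hist, bin_width_s=1e-9)
print(depth.shape)  # (32, 64)
```

In practice such a naive peak estimate is noisy at low photon counts, which is exactly the regime where a learned reconstruction, guided by the high-resolution intensity frames, improves on it.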