Depth measurements are vital for many emerging technologies with applications in augmented reality, robotics, gesture detection, and facial recognition. These applications, however, demand compact and low-power systems beyond the capabilities of many state-of-the-art depth cameras. While active illumination techniques can enable precise scene reconstruction, they increase power consumption, and systems that employ stereo require extended form factors to separate viewpoints. Here, we exploit a single, spatially multiplexed aperture of nanoscatterers to demonstrate a solution that replicates the functionality of a high-performance depth camera typically comprising a spatial light modulator, polarizer, and multiple lenses. Using cylindrical nanoscatterers that can arbitrarily modify the phase of an incident wavefront, we passively encode two complementary optical responses to depth information in a scene. The designed optical metasurfaces simultaneously generate a focused accelerating beam and a focused rotating beam that exploit wavefront propagation-invariance to produce paired, adjacent images with a single camera snapshot. Compared with conventional depth-from-defocus methods, this technique simultaneously enhances depth precision and depth of field. By decoding the captured data in software, our system produces a fully reconstructed image and transverse depth map, providing an optically passive ranging solution. In our reconstruction algorithm, we account for the field curvature of our metasurface by calculating the change in Gouy phase over the field of view, enabling a fractional ranging error of 1.7%. We demonstrate a precise, visible wavelength, and polarization-insensitive metasurface depth camera with a compact 2 mm² aperture.
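For illustration only, the sketch below shows the rotating-PSF ranging idea the abstract describes: a point-spread function whose orientation angle tracks the Gouy phase of a focused beam, so a measured rotation angle can be inverted to a defocus distance. The Rayleigh range, nominal focus distance, and the assumption that the rotation angle equals arctan(z/z_R) are hypothetical placeholders, not the paper's calibrated model or reconstruction algorithm.

```python
import numpy as np

# Hypothetical, illustrative parameters (not from the paper).
z_R = 0.5e-3        # assumed Rayleigh range of the focused beam, in metres
z_focus = 10.0e-3   # assumed nominal focus distance of the camera, in metres

def gouy_phase(z):
    """Gouy phase accumulated at defocus z relative to the focal plane."""
    return np.arctan(z / z_R)

def depth_from_rotation(theta):
    """Invert the assumed angle-to-defocus mapping: recover defocus z from a
    measured PSF rotation angle theta (radians), assuming theta equals the
    Gouy phase arctan(z / z_R)."""
    return z_R * np.tan(theta)

# Example: a PSF rotated by 0.3 rad implies this defocus and object depth.
theta_measured = 0.3
dz = depth_from_rotation(theta_measured)
print(f"defocus = {dz * 1e3:.3f} mm, depth ≈ {(z_focus + dz) * 1e3:.3f} mm")
```

In practice a calibration would map measured rotation angles to object distances over the full field of view, with the field-curvature correction the abstract mentions applied before inversion.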