Abstract

Nano-optic imagers that modulate light at sub-wavelength scales could enable new applications in diverse domains ranging from robotics to medicine. Although metasurface optics offer a path to such ultra-small imagers, existing methods have achieved image quality far worse than bulky refractive alternatives, fundamentally limited by aberrations at large apertures and low f-numbers. In this work, we close this performance gap by introducing a neural nano-optics imager. We devise a fully differentiable learning framework that learns a metasurface physical structure in conjunction with a neural feature-based image reconstruction algorithm. Experimentally validating the proposed method, we achieve an order of magnitude lower reconstruction error than existing approaches. As such, we present a high-quality, nano-optic imager that combines the widest field-of-view for full-color metasurface operation while simultaneously achieving the largest demonstrated aperture of 0.5 mm at an f-number of 2.

Highlights

  • Nano-optic imagers that modulate light at sub-wavelength scales could enable new applications in diverse domains ranging from robotics to medicine

  • Imagers that are an order of magnitude smaller could enable numerous novel applications in nano-robotics, in vivo imaging, AR/VR, and health monitoring

  • We turn towards computationally designed metasurface optics to close this gap and enable ultra-compact cameras that could facilitate new capabilities in endoscopy, brain imaging, or in a distributed fashion as collaborative optical “dust” on scene surfaces



Introduction

Nano-optic imagers that modulate light at sub-wavelength scales could enable new applications in diverse domains ranging from robotics to medicine. However, existing metasurface imagers suffer from an order of magnitude higher reconstruction error than is achievable with refractive compound lenses, owing to severe, wavelength-dependent aberrations that arise from discontinuities in their imparted phase[2,5,10,11,12,13,14,15,16]. Approaches that support a wide FOV typically rely either on small input apertures that limit light collection[24] or on multiple metasurfaces[11], which drastically increases fabrication complexity. These multiple metasurfaces are separated by a gap that scales linearly with the aperture, obviating the size benefit of meta-optics as the aperture increases.
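The chromatic aberration described above can be illustrated with a minimal paraxial toy model (not the authors' full differentiable simulator): a thin metalens imparts a fixed focusing phase designed for one wavelength, so at other wavelengths a residual defocus phase remains and the point spread function (PSF) broadens. The 0.5 mm aperture and f-number of 2 (hence a 1 mm focal length) are taken from the paper's specifications; the grid size and wavelengths are illustrative assumptions.

```python
import numpy as np

N = 256
aperture_radius = 0.25e-3   # 0.5 mm aperture diameter, as in the paper
f = 1.0e-3                  # focal length giving f-number 2
lam_d = 550e-9              # design wavelength (assumed, green)

x = np.linspace(-aperture_radius, aperture_radius, N)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
A = (r2 <= aperture_radius**2).astype(float)   # circular aperture mask

def focal_psf(lam):
    """Paraxial PSF at the focal plane for wavelength lam."""
    lens_phase = -np.pi * r2 / (lam_d * f)   # metalens phase, fixed at design wavelength
    fresnel = np.pi * r2 / (lam * f)         # quadratic Fresnel kernel at lam
    # At lam == lam_d the two phases cancel, leaving a diffraction-limited Airy spot;
    # at other wavelengths the residual quadratic phase acts as strong defocus.
    field = A * np.exp(1j * (lens_phase + fresnel))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()

psf_green = focal_psf(550e-9)   # at design wavelength: sharp peak
psf_red = focal_psf(650e-9)     # off design wavelength: defocused, broader PSF
print(psf_green.max() > psf_red.max())  # peak concentration drops off-design
```

The sharp wavelength dependence of the PSF is precisely what the proposed framework compensates for, by differentiating through such a forward model to jointly optimize the metasurface structure and the neural reconstruction network.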


