Abstract

Moving computation closer to sensors is a promising approach to addressing bottlenecks in computing speed, power consumption, and data storage. Pre-sensor computing with optical neural networks (ONNs) enables extensive processing before photodetection. However, the lack of nonlinear activation and the dependence on laser input limit computational capacity, practicality, and scalability. A compact, passive multilayer ONN (MONN) is proposed, comprising two convolution layers with an inserted nonlinear layer; it performs pre-sensor computation on incoherent light using designed passive masks and a quantum dot film. MONN has an optical length as short as 5 millimeters, two orders of magnitude shorter than state-of-the-art lens-based ONNs. It outperforms a linear single-layer ONN across various vision tasks, offloading up to 95% of the computationally expensive operations from electronics into optics. MONN motivates an emerging paradigm for mobile vision that fulfills the demands of practicality, miniaturization, and low power consumption.
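The architecture described above (optical convolution, a nonlinear activation, then a second optical convolution) can be sketched numerically. The following is a minimal illustrative simulation, not the authors' method: the mask values, the saturable-absorber-style response standing in for the quantum dot film, and all sizes are assumptions chosen only to make the two-layer-plus-nonlinearity pipeline concrete. Intensities and mask weights are kept non-negative, reflecting incoherent-light operation with passive elements.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D correlation, standing in for the optical
    convolution performed by a passive mask."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def qd_nonlinearity(x, saturation=1.0):
    """Hypothetical saturable response modeling the quantum dot film;
    the abstract does not specify the actual device response."""
    return saturation * (1.0 - np.exp(-np.maximum(x, 0.0) / saturation))

rng = np.random.default_rng(0)
image = rng.random((16, 16))   # incoherent-light intensity (non-negative)
mask1 = rng.random((3, 3))     # passive masks: non-negative weights
mask2 = rng.random((3, 3))

# conv layer -> nonlinear layer -> conv layer
features = conv2d(qd_nonlinearity(conv2d(image, mask1)), mask2)
```

Because all heavy multiply-accumulate work happens in `conv2d`, this toy model also illustrates the offloading claim: in the physical system those operations are carried out passively by the optics, leaving only readout to electronics.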
