Abstract

As an optical classifier implemented as a physical neural network, a standalone diffractive deep neural network (D2NN) can learn the single-view spatial feature mapping between input light fields and ground-truth labels by training on a large number of samples. However, it still cannot approach, let alone reach, satisfactory classification accuracy on three-dimensional (3D) targets, because it discards much of the effective light-field information available from other views. This Letter presents a multiple-view D2NNs array (MDA) scheme that provides a significant inference improvement over an individual D2NN or Res-D2NN by constructing a complementary mechanism across distinct views and then merging all single-view base learners on an electronic computer. Furthermore, a robust multiple-view D2NNs array (r-MDA) framework is demonstrated to resist the redundant spatial features of invalid light fields caused by severe optical disturbances.
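
Although the Letter does not publish source code, the electronic fusion step of the MDA can be illustrated with a minimal sketch. It assumes each single-view D2NN has already produced per-class detector intensities; the names `view_scores` and `fuse_views`, the soft-voting rule, and the per-view weighting hook are all hypothetical illustrations rather than the authors' implementation. The optional weights merely gesture at how an r-MDA-style variant might down-weight views corrupted by optical disturbances.

```python
import numpy as np

rng = np.random.default_rng(0)
num_views, num_classes = 4, 10

def softmax(x, axis=-1):
    """Numerically stable softmax over detector intensities."""
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Stand-in for the optical forward passes: one intensity vector per
# view-specific base learner (in practice, photodetector readouts).
view_scores = rng.random((num_views, num_classes))

def fuse_views(scores, weights=None):
    """Merge per-view base learners by weighted soft voting.

    scores  : (num_views, num_classes) detector intensities
    weights : optional per-view reliability weights; uniform weights
              recover plain averaging, while down-weighting disturbed
              views sketches a robust (r-MDA-like) fusion.
    """
    probs = softmax(scores, axis=-1)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    fused = np.tensordot(weights, probs, axes=1)  # (num_classes,)
    return int(np.argmax(fused)), fused

label, fused = fuse_views(view_scores)
print(f"predicted class: {label}")
```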
