Abstract

Achieving high-fidelity image transmission through turbid media is a significant challenge facing both the AI and photonic/optical communities. While this capability holds promise for a variety of applications, including data transfer, neural endoscopy, and multi-mode optical fiber-based imaging, conventional deep learning methods struggle to capture the nuances of light propagation, leading to weak generalization and limited reconstruction performance. To address this limitation, we investigated the non-locality present in the reconstructed images and discovered that conventional deep learning methods rely on specific features extracted from the training dataset rather than meticulously reconstructing each pixel. This suggests that they fail to effectively capture long-range dependencies between pixels, which are crucial for accurate image reconstruction. Inspired by the physics of light propagation in turbid media, we developed a global attention mechanism to approach this problem from a broader perspective. Our network harnesses information redundancy generated by peculiar non-local features across the input and output fiber facets. This mechanism enables a two-order-of-magnitude performance boost and high fidelity to the data context, ensuring an accurate representation of intricate details in a pixel-to-pixel reconstruction rather than mere loss minimization.
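The core idea above — letting every output pixel attend to every input pixel so that long-range dependencies across the fiber facets are captured — can be illustrated with a minimal scaled dot-product attention sketch. This is a generic illustration, not the authors' architecture; the function and variable names (`global_attention`, `wq`, `wk`, `wv`) and the patch/feature sizes are hypothetical.

```python
import numpy as np

def global_attention(x, wq, wk, wv):
    """Scaled dot-product attention over ALL pixel tokens at once.

    x: (n_tokens, d) flattened facet features. Because the (n, n) score
    matrix couples every token pair, each output pixel aggregates
    information from the entire speckle pattern, not just a local patch.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])       # (n, n) all-pairs similarities
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # row-wise softmax
    return attn @ v                               # (n, d) globally mixed features

rng = np.random.default_rng(0)
n, d = 16 * 16, 32                                # hypothetical 16x16 patch, 32-dim features
x = rng.standard_normal((n, d))
wq, wk, wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
out = global_attention(x, wq, wk, wv)
print(out.shape)  # (256, 32)
```

This contrasts with a convolutional layer, whose fixed receptive field only mixes nearby pixels; here the softmax-weighted sum spans all tokens, which is the "broader perspective" the abstract attributes to its global attention mechanism.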

