Abstract

Training a neural network to reconstruct images from the time-series waveforms produced by fiber-optic probes not only yields high-quality, content-aware images but can also generalize to image types beyond those in lower-quality training data. Image reconstruction, as an inverse problem, uses the collected signals and a system model to recover the desired image, and it faces mathematical challenges such as distortion and degradation. In this paper, we introduce REcaNet, a multi-mode fiber image restoration model based on an enhanced residual convolutional neural network (CNN). The network employs a symmetric encoder-decoder architecture that first downscales the image and then upscales it for restoration, reconstructing the high-level semantic feature map produced by the encoder back to the original image resolution. We further incorporate weight initialization, attention mechanisms, and residual connections to enrich the final restored feature map with low-dimensional features and to promote the fusion of features from different layers. The algorithm performs well on three datasets collected through multi-mode fibers (MNIST, Clothes, and Omniglot), with significant improvements in metrics such as the structural similarity index (SSIM).
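The core idea of the architecture can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the pooling, upsampling, and sigmoid gate below are simplified stand-ins for the network's learned convolutional layers, chosen only to show how a symmetric downscale/upscale pass fuses attention-weighted encoder features back into the decoder output through a residual connection.

```python
import numpy as np

def downsample(x):
    """Encoder step: 2x average pooling (stand-in for strided convolution)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Decoder step: 2x nearest-neighbour upsampling back to input resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def attention_gate(skip, decoded):
    """Sigmoid gate: reweight low-level encoder features by decoder activity."""
    gate = 1.0 / (1.0 + np.exp(-decoded))
    return skip * gate

def restore(x):
    """Symmetric downscale-then-upscale pass with attention-gated residual fusion."""
    skip = x                     # low-dimensional features kept for the decoder
    encoded = downsample(x)      # encoder: halve spatial resolution
    decoded = upsample(encoded)  # decoder: restore original resolution
    return decoded + attention_gate(skip, decoded)  # residual connection

img = np.arange(16, dtype=float).reshape(4, 4)
out = restore(img)
assert out.shape == img.shape  # output matches the original resolution
```

In the full model each of these steps would be a learned convolutional block, but the data flow, encode, decode, then fuse gated skip features residually, is the same.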
