Abstract

Ghost imaging (GI) illuminates an object with a sequence of light patterns and records the corresponding total echo intensities with a bucket detector. Correlating the patterns with the bucket signals yields the image. Because this mechanism differs from traditional imaging methods, GI has received extensive attention over the past two decades. However, the same mechanism also causes GI to suffer from slow imaging speed and poor imaging quality. In previous work, each sample, consisting of an illumination pattern and its detected bucket signal, was treated independently of the others; the correlation is therefore a linear superposition of the sequential data. Inspired by human speech, where sequential words are linked by a certain semantic logic and even an incomplete sentence can still convey the correct meaning, we propose a different perspective: there is potentially a non-linear connection between the sequential samples in GI. We therefore built a system based on a recurrent neural network (RNN), called GI-RNN, which recovers high-quality images at low sampling rates. Tests on MNIST handwritten digits show that, at a sampling rate of 1.28%, GI-RNN achieves an image quality 12.58 dB higher than the traditional basic correlation algorithm and 6.61 dB higher than a compressed sensing algorithm. After being trained on natural images, GI-RNN exhibits strong generalization ability: not only does it work well on standard test images such as "cameraman", but it can also recover real natural scenes at a 3% sampling rate with SSIMs greater than 0.7.
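The "basic correlation algorithm" that the abstract uses as a baseline is standard in the GI literature, and a minimal NumPy sketch of it makes the "linear superposition" point concrete: each pattern contributes to the estimate independently, weighted only by the fluctuation of its own bucket signal around the mean. The function and variable names below are our own, and the simulated scene is illustrative; note that on a 28x28 MNIST image (784 pixels), a 1.28% sampling rate corresponds to roughly 10 patterns.

```python
import numpy as np

def correlation_gi(patterns, bucket):
    """Basic correlation ghost imaging reconstruction.

    patterns : (M, H, W) array of illumination patterns I_m(x, y)
    bucket   : (M,) array of bucket-detector intensities B_m
    Returns the reconstructed image G(x, y) = <(B_m - <B>) I_m(x, y)>.
    """
    # Each sample is treated independently of the others: the image
    # estimate is a linear superposition of the patterns, weighted by
    # the deviation of each bucket signal from its mean.
    db = bucket - bucket.mean()
    return np.tensordot(db, patterns, axes=1) / len(bucket)

# Illustrative simulation: M = 10 random patterns on a 28x28 scene,
# i.e. a sampling rate of about 10 / 784 = 1.28%.
rng = np.random.default_rng(0)
scene = rng.random((28, 28))
patterns = rng.random((10, 28, 28))
bucket = np.einsum("mhw,hw->m", patterns, scene)  # simulated bucket detection
image = correlation_gi(patterns, bucket)
```

GI-RNN, by contrast, feeds the sequence of (pattern, bucket) samples through a recurrent network, so later samples can interact non-linearly with earlier ones rather than being summed independently; the specific network architecture is not detailed in the abstract.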
