Abstract

Ghost imaging (GI) facilitates image acquisition under low-light conditions via single-pixel measurements and thus has great potential for applications in various fields, ranging from biomedical imaging to remote sensing. However, GI usually requires a large number of single-pixel measurements to reconstruct a high-resolution image, imposing a practical limit on its applications. Here we propose a far-field super-resolution GI technique that incorporates the physical model of GI image formation into a deep neural network. The resulting hybrid neural network does not need to be pre-trained on any dataset, and allows the reconstruction of a far-field image with a resolution beyond the diffraction limit. Furthermore, the physical model imposes a constraint on the network output, making it effectively interpretable. We experimentally demonstrate the proposed GI technique by imaging a flying drone, and show that it outperforms several widely used GI techniques in terms of both spatial resolution and sampling ratio. We believe that this study provides a new framework for GI and paves the way for its practical applications.

Highlights

  • Conventional imaging methods exploit the light reflected or scattered by an object to form its image on a two-dimensional sensor that has millions of pixels

  • Inspired by the idea of deep image prior (DIP), here we propose a new ghost imaging (GI) technique that incorporates the physical model of GI image formation into a deep neural network (DNN)

  • One can clearly see that all the binary objects have been successfully reconstructed by GI using deep neural network constraint (GIDC), with the number of measurements as low as 256 (β = 6.25%)
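The physics constraint behind GIDC can be sketched as a loss function: the network's output image is pushed to reproduce the measured bucket values through the known forward model of GI. The following is a minimal illustrative sketch under our own naming (`gidc_loss`, `forward_bucket` are not from the paper), not the paper's implementation:

```python
import numpy as np

def forward_bucket(patterns, image):
    """Physical forward model of GI: for each illumination pattern I_m,
    the single-pixel (bucket) value is B_m = sum_xy I_m(x, y) * O(x, y)."""
    return np.einsum('mij,ij->m', patterns, image)

def gidc_loss(net_output, patterns, bucket):
    """Physics-constrained data-fidelity loss: mean squared error between
    the bucket signal predicted from the network output and the measured one."""
    predicted = forward_bucket(patterns, net_output)
    return np.mean((predicted - bucket) ** 2)

# Tiny synthetic check: a loss of ~0 only for an image consistent
# with the measurements (object and patterns here are made up).
rng = np.random.default_rng(1)
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0
patterns = rng.random((64, 8, 8))
bucket = forward_bucket(patterns, obj)
loss_true = gidc_loss(obj, patterns, bucket)
loss_wrong = gidc_loss(np.ones((8, 8)), patterns, bucket)
```

In a DIP-style scheme this loss would be minimized over the weights of an untrained network whose output is the candidate image, so the constraint comes from the measurement physics rather than from a training dataset.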

Introduction

Conventional imaging methods exploit the light reflected or scattered by an object to form its image on a two-dimensional sensor that has millions of pixels. GI, by contrast, illuminates the object with a sequence of time-varying light patterns and collects the total reflected or transmitted light with a single-pixel (bucket) detector. Although neither detector directly records a resolvable image of the object, one can employ an intuitive linear algorithm to reconstruct the image by spatially correlating the acquired time-varying patterns with the synchronized bucket signal. To obtain an N-pixel image, one needs at least M = N measurements to satisfy β = M/N = 100%, where β denotes the sampling ratio (the Nyquist sampling criterion). In many applications, such as remote sensing [10], a rotating ground glass (RGG) is frequently used to generate the speckle illumination patterns.
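The linear correlation reconstruction described above can be sketched in a few lines. This is an illustrative simulation with made-up data (the 32×32 object and uniform random patterns are our assumptions, not the paper's setup), using the standard second-order correlation G(x, y) = ⟨B·I(x, y)⟩ − ⟨B⟩⟨I(x, y)⟩:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 binary object (illustrative only).
N = 32
obj = np.zeros((N, N))
obj[8:24, 12:20] = 1.0

# M random illumination patterns, as produced e.g. by a rotating ground glass.
M = N * N  # beta = M / N^2 = 100% (Nyquist criterion)
patterns = rng.random((M, N, N))

# Bucket (single-pixel) signal: total light collected for each pattern.
bucket = np.einsum('mij,ij->m', patterns, obj)

# Second-order correlation reconstruction:
# G(x, y) = <B * I(x, y)> - <B> <I(x, y)>
recon = (np.einsum('m,mij->ij', bucket, patterns) / M
         - bucket.mean() * patterns.mean(axis=0))

# Normalize for display; pixels inside the object come out brighter on average.
recon_norm = (recon - recon.min()) / (recon.max() - recon.min())
```

Even at β = 100% the correlation image is noisy for random patterns, which is why larger numbers of measurements (or the learning-based reconstruction proposed here) are needed in practice.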
