Abstract

The ability to generate and handle quantum correlations in twin-beam states is the foundation for overcoming the imaging limitations imposed by classical light [1], and it relies on efficient detectors able to select temporal correlations among photons with a maximized signal-to-noise ratio. Indeed, the quantum information is carried by the interaction of just two photons with the sample, so losing even one of them, or confusing it with background events, can completely compromise the experiment. To this end, assessing the expected probability of measurement errors is of utmost importance when optimizing the detector design. A typical quantum imaging setup based on Spontaneous Parametric Down-Conversion (SPDC), the most effective way of producing temporally and spatially correlated entangled photons [2], involves a pump laser hitting a non-linear birefringent crystal to produce a down-converted signal, passive optical components that generate the polarization-entangled photons, and a pixelated photon detector that eventually reveals the position of coincident entangled photons. The overall detection efficiency of the measurement setup (i.e., the heralding efficiency $\eta_H$) gathers all the losses and non-idealities of the setup components, namely the crystal conversion efficiency, the optical losses, and the detector Photon Detection Efficiency (PDE) and fill factor. In this study, we consider detection based on a Single-Photon Avalanche Diode (SPAD) array able to detect coincidences on-chip with an event-driven architecture, which provides fast readout, low power consumption, and optimized data throughput. However, in any on-chip coincidence detection with an event-driven architecture, some operations take a finite time to complete; combined with the generation of undesired single photons (due both to the limited heralding efficiency and to the SPAD Dark Count Rate, DCR), this gives rise to three measurement errors, described here with their related probabilities. Firstly, due to the finite duration of the coincidence window $\Delta t_{COINC}$, there exists a probability $P_{false}$ of wrongly detecting two single photons as a coincidence, which can be expressed as:
\begin{equation*}
P_{false} = \frac{\left[2\mu_{SOURCE}\,\eta_H\left(1-\eta_H\right) + \mathrm{DCR}\right]\left(1 - e^{-\left[2\mu_{SOURCE}\,\eta_H\left(1-\eta_H\right) + \mathrm{DCR}\right]\Delta t_{COINC}}\right)}{\mu_{SOURCE}\,\eta_H^2 + \left[2\mu_{SOURCE}\,\eta_H\left(1-\eta_H\right) + \mathrm{DCR}\right]\left(1 - e^{-\left[2\mu_{SOURCE}\,\eta_H\left(1-\eta_H\right) + \mathrm{DCR}\right]\Delta t_{COINC}}\right)}\tag{1}
\end{equation*}
where $\mu_{SOURCE}$ is the number of photon pairs generated per unit time by an ideal non-linear crystal.
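As a minimal numerical sketch (not part of the original work), Eq. (1) can be evaluated directly. The function name, the illustrative operating point, and the reading of $\mu_{SOURCE}$ as a pair-generation rate (pairs per second, as dimensional consistency with DCR suggests) are assumptions for illustration:

```python
import math

def p_false(mu_source, eta_h, dcr, dt_coinc):
    """Probability of a false coincidence, Eq. (1).

    mu_source : pair-generation rate of the source (pairs/s), assumed
    eta_h     : heralding efficiency (0..1)
    dcr       : dark count rate of the SPAD array (counts/s)
    dt_coinc  : coincidence window duration (s)
    """
    # Rate of uncorrelated single events: unpaired photons plus dark counts.
    r_single = 2.0 * mu_source * eta_h * (1.0 - eta_h) + dcr
    # Poisson probability of at least one single event inside the window.
    p_single = 1.0 - math.exp(-r_single * dt_coinc)
    # True-coincidence rate: both photons of a pair are detected.
    r_true = mu_source * eta_h ** 2
    return r_single * p_single / (r_true + r_single * p_single)

# Hypothetical operating point: 1e6 pairs/s, 30% heralding efficiency,
# 100 cps DCR, 10 ns coincidence window.
print(p_false(1e6, 0.3, 100.0, 10e-9))  # ~0.019
```

With these hypothetical values the single-event rate is dominated by unpaired photons rather than dark counts, which illustrates how strongly $P_{false}$ depends on $\eta_H$.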
Secondly, during the time needed by the array to "freeze" the pixel states after a coincidence event ($\Delta t_{FREEZE}$) so that they can be read out, there is a probability $P_{spurious}$ of detecting a third, spurious event, which is expressed as:
\begin{equation*}
P_{spurious} = 1 - e^{-\left[2\mu_{SOURCE}\,\eta_H\left(1-\eta_H\right) + \mathrm{DCR}\right]\Delta t_{FREEZE}}\tag{2}
\end{equation*}
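Eq. (2) is the Poisson probability of at least one uncorrelated single event during the freeze interval. A matching sketch, reusing the same assumed parameters and a hypothetical 50 ns freeze time:

```python
import math

def p_spurious(mu_source, eta_h, dcr, dt_freeze):
    """Probability of a spurious third event during the freeze time, Eq. (2).

    dt_freeze : time needed to freeze the pixel states for readout (s)
    """
    # Same uncorrelated single-event rate as in Eq. (1).
    r_single = 2.0 * mu_source * eta_h * (1.0 - eta_h) + dcr
    # Poisson probability of at least one such event within dt_freeze.
    return 1.0 - math.exp(-r_single * dt_freeze)

# Same hypothetical operating point, with a 50 ns freeze time.
print(p_spurious(1e6, 0.3, 100.0, 50e-9))  # ~0.021
```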
