Abstract

Single-photon avalanche diode (SPAD) arrays are solid-state detectors that offer imaging capabilities at the level of individual photons, with unparalleled photon counting and time-resolved performance. This fascinating technology has progressed at a very fast pace in the past 15 years, since its inception in standard CMOS technology in 2003. A host of architectures have been investigated, ranging from simpler implementations, based solely on off-chip data processing, to progressively “smarter” sensors that include on-chip, or even pixel-level, time-stamping and processing capabilities. As the technology has matured, a range of biophotonics applications has been explored, including (endoscopic) FLIM, (multibeam multiphoton) FLIM-FRET, SPIM-FCS, super-resolution microscopy, time-resolved Raman spectroscopy, NIROT and PET. We will review some representative sensors and their corresponding applications, including the most relevant challenges faced by chip designers and end-users. Finally, we will provide an outlook on the future of this technology.

Highlights

  • Individual single-photon avalanche diodes (SPADs) have long been the detector of choice when deep sub-nanosecond timing performance is required, due to their excellent single-photon detection and time-stamping capability [1,2,3,4]

  • The applicability of single-photon avalanche diode (SPAD) arrays for fluorescence lifetime imaging (FLIM) was limited in early implementations by the relatively low photon detection probability (PDP) and fill factor, combined with a high dark count rate (DCR)

  • As the previous discussion has indicated, a meaningful comparison between electron-multiplying charge-coupled devices (EMCCDs), sCMOS and SPAD imagers needs to take into account the different noise contributions and achievable frame rates, in addition to the overall sensitivity (a numerical sketch of these trade-offs follows this list)
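
The last two highlights touch on the quantities that dominate such a comparison. As a back-of-the-envelope illustration only (all parameter values below are hypothetical, and this is not a noise model taken from the paper), the following Python sketch shows how PDP, fill factor and DCR enter a shot-noise-limited SNR estimate for a photon-counting SPAD pixel, and how a read-noise term and an excess noise factor enter for an sCMOS- or EMCCD-like pixel.

```python
import numpy as np

def spad_snr(photon_rate, pdp, fill_factor, dcr, t_exp):
    """SNR of a photon-counting SPAD pixel: Poisson signal plus Poisson dark counts."""
    signal = photon_rate * pdp * fill_factor * t_exp   # expected detected photons
    dark = dcr * t_exp                                  # expected dark counts
    return signal / np.sqrt(signal + dark)

def camera_snr(photon_rate, qe, t_exp, dark_current, read_noise, excess=1.0):
    """SNR of a camera-like pixel; excess ~ sqrt(2) would mimic EMCCD multiplication noise."""
    signal = photon_rate * qe * t_exp
    dark = dark_current * t_exp
    variance = excess**2 * (signal + dark) + read_noise**2
    return signal / np.sqrt(variance)

# Hypothetical operating point: 50 kphotons/s on the pixel, 1 ms exposure.
rate, t = 5e4, 1e-3
print(spad_snr(rate, pdp=0.3, fill_factor=0.5, dcr=100.0, t_exp=t))
print(camera_snr(rate, qe=0.8, t_exp=t, dark_current=10.0, read_noise=1.5))
```

With these made-up numbers the camera pixel wins on raw SNR per exposure, while the SPAD pixel has no read-noise penalty at all, which is why the comparison shifts in its favour at very short exposures, high frame rates or when time-resolved information is needed.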



Conclusions

Fully integrated SPAD imagers developed for NIROT show a substantial increase in array resolution and photon throughput due to on-chip time-stamping and histogram generation, over and above what can be obtained by composing arrays of single devices or SiPMs.

The combination of a top sensor layer with a bottom (likely all-digital CMOS) control and processing layer, each optimised for its respective function, can be achieved with 3D-stacking techniques (see Fig. 7a for a concept image); these techniques are progressively becoming accessible to a larger user community and benefit from developments in consumer markets (e.g. cameras for mobile phone applications), where significant resources are available. Such an approach could potentially enable a high PDE, low DCR and reduced jitter and afterpulsing, while adding advanced functionality and low power consumption thanks to the use of smaller technology nodes in the bottom tier. Plotting reported sensors as PDE vs. DCR indicates a move towards higher PDE values; this is not easy to achieve while still maintaining a low DCR (an ideal sensor would sit in the bottom-right corner of such a plot)
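
As a rough illustration of the data-reduction argument behind on-chip time-stamping and histogram generation (a hypothetical Python sketch, not the pipeline of any sensor discussed here; the decay lifetime, TDC bin width and photon counts are made-up values), consider per-pixel TCSPC histogramming of photon arrival times:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical acquisition: exponential fluorescence decay (tau = 2 ns) sampled
# by a TDC with 50 ps bins over a 25 ns laser period.
tau_ns, period_ns, bin_ps = 2.0, 25.0, 50.0
n_bins = int(period_ns * 1000 / bin_ps)                    # 500 TDC bins
timestamps_ns = rng.exponential(tau_ns, size=100_000) % period_ns

# On-chip-style histogramming: each time-stamp increments one bin counter.
bin_idx = (timestamps_ns * 1000 / bin_ps).astype(int)
histogram = np.bincount(bin_idx, minlength=n_bins)

# Data reduction: shipping every raw time-stamp off-chip vs. one fixed-size
# histogram per pixel and frame (16-bit counters assumed).
raw_bits = timestamps_ns.size * np.ceil(np.log2(n_bins))
hist_bits = n_bins * 16
print(f"raw: {raw_bits/8/1024:.0f} kB  histogram: {hist_bits/8/1024:.1f} kB")
```

The histogram output stays constant in size no matter how many photons are accumulated, which is the essence of the throughput gain attributed above to on-chip histogram generation.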
