Abstract

Modern photon science at high-repetition-rate free-electron laser (FEL) facilities and beyond relies on 2D pixel detectors operating at ever higher frame rates (towards 100 kHz at LCLS-II) and producing rapidly growing volumes of data (towards TB/s). This data must be stored rapidly for offline analysis and summarized in real time to give scientists online feedback. At LCLS all raw data was stored, but at LCLS-II this would incur a prohibitive cost; instead, real-time processing of pixel detector data (dark, gain, common-mode and background corrections, charge summing, subpixel position estimation, photon counting, and data summarization) can reduce the size and cost of online processing, offline processing, and storage by orders of magnitude while preserving the full photon information, by exploiting the compressibility of the sparse data typical of LCLS-II applications. Faced with a similar big-data challenge a decade ago, computer vision stimulated revolutionary advances in machine learning hardware and software. We investigated whether these developments are useful for processing data from high-speed pixel detectors and found that typical deep learning models and autoencoder architectures failed to provide useful noise reduction while preserving full photon information, presumably because of the very different statistics and feature sets of computer vision and radiation imaging. However, the raw performance of modern frameworks like TensorFlow inspired us to reimplement, in TensorFlow, mathematically equivalent versions of the state-of-the-art “classical” algorithms used at LCLS. The resulting TensorFlow models are elegant, compact and hardware agnostic; they process data 1 to 2 orders of magnitude faster on an inexpensive consumer GPU and reduce the projected cost of photon-preserving online analysis and compression at LCLS-II by 3 orders of magnitude. They also enabled the ongoing development of a pipelined hardware system expected to yield an additional 3 to 4 orders of magnitude of speedup, which is necessary to meet the data acquisition and storage requirements of LCLS-II and could allow every FEL pulse to be acquired at full speed. Computer vision a decade ago was dominated by hand-crafted filters; their structure inspired the deep learning revolution that produced modern deep convolutional networks. Similarly, our TensorFlow filters provide inspiration for the design of future deep learning models for ultrafast and efficient processing and classification of pixel detector images at FEL facilities.
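
The sketch below illustrates the general idea of expressing a classical detector correction chain (dark subtraction, gain correction, common-mode removal, photon counting) as TensorFlow tensor operations, so that the same compact code runs unchanged on CPU or GPU. It is not the authors' production code: the array shapes, the per-row-median common-mode model, the photon_adu threshold, and the function names are illustrative assumptions.

```python
# Hedged sketch, assuming a simple per-row common-mode model and an
# illustrative photon threshold; not the LCLS-II production pipeline.
import tensorflow as tf

def per_row_median(x):
    """Approximate median of each detector row via a sort (x: batch, rows, cols)."""
    sorted_x = tf.sort(x, axis=-1)
    mid = tf.shape(x)[-1] // 2
    return tf.gather(sorted_x, mid, axis=-1)   # shape (batch, rows)

@tf.function
def correct_and_count(raw, dark, gain, photon_adu=100.0):
    """raw: (batch, rows, cols) ADU frames; dark, gain: (rows, cols) calibration maps."""
    x = (tf.cast(raw, tf.float32) - dark) * gain       # dark (pedestal) subtraction and gain correction
    x = x - per_row_median(x)[..., tf.newaxis]          # crude common-mode removal per row
    photons = tf.nn.relu(tf.round(x / photon_adu))      # threshold to non-negative integer photon counts
    return photons

# Usage example with dummy calibration constants: batched frames flow
# through a single compiled graph call on whatever device is available.
frames = tf.random.uniform((16, 512, 512), 0, 4096, dtype=tf.float32)
dark = tf.zeros((512, 512))
gain = tf.ones((512, 512))
counts = correct_and_count(frames, dark, gain)
```

Because every step is a dense tensor operation, the whole chain compiles into one graph that TensorFlow can batch and dispatch to a GPU, which is the property exploited to gain the reported speedups over per-pixel CPU loops.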
