Abstract

The continual evolution of photon sources and high-performance detectors drives cutting-edge experiments that can produce very high throughput data streams and generate large data volumes that are challenging to manage and store. In these cases, efficient data transfer and processing architectures that allow online image correction, data reduction or compression become fundamental. This work investigates different technical options and methods for data placement from the detector head to the processing computing infrastructure, taking into account the particularities of modern modular high-performance detectors. In order to compare realistic figures, the future ESRF beamline dedicated to macromolecular X-ray crystallography, EBSL8, is taken as an example; it will use a PSI JUNGFRAU 4M detector generating up to 16 GB of data per second, operating continuously for several minutes. Although such an experiment seems possible at the target speed with the 100 Gb s⁻¹ network cards that are currently available, the simulations highlight some potential bottlenecks when using a traditional software stack. An evaluation of solutions is presented that implement remote direct memory access (RDMA) over Converged Ethernet techniques. A synchronization mechanism is proposed between an RDMA network interface card (RNIC) and a graphics processing unit (GPU) accelerator in charge of the online data processing. The placement of the detector images onto the GPU is made to overlap with the computation carried out, potentially hiding the transfer latencies. As a proof of concept, a detector simulator and a backend GPU receiver with a rejection and compression algorithm suitable for a synchrotron serial crystallography (SSX) experiment are developed. It is concluded that the available transfer throughput from the RNIC to the GPU accelerator is at present the major bottleneck in online processing for SSX experiments.
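The throughput figures quoted above can be sanity-checked with simple arithmetic. The sketch below is a back-of-envelope estimate, not from the paper: the 16-bit pixel depth and 2 kHz frame rate are assumptions chosen to be consistent with the abstract's 16 GB s⁻¹ figure for a 4-megapixel detector.

```python
# Back-of-envelope throughput check for a JUNGFRAU 4M-class stream.
# Assumptions (illustrative, not from the paper): 16-bit raw pixels
# and a 2 kHz continuous frame rate.

PIXELS_PER_FRAME = 4_000_000   # "4M" detector
BYTES_PER_PIXEL = 2            # assumed 16-bit raw pixel depth
FRAME_RATE_HZ = 2_000          # assumed acquisition rate

frame_bytes = PIXELS_PER_FRAME * BYTES_PER_PIXEL       # 8 MB per frame
stream_bytes_per_s = frame_bytes * FRAME_RATE_HZ       # 16 GB per second

# A single 100 Gb/s link carries at most 12.5 GB/s of payload
# (ignoring protocol overhead), so one NIC alone cannot absorb the
# full stream: the load must be spread over several links/receivers,
# which matches the modular, multi-link design of such detectors.
link_bytes_per_s = 100e9 / 8
links_needed = -(-stream_bytes_per_s // link_bytes_per_s)  # ceil division

print(frame_bytes, stream_bytes_per_s, int(links_needed))
```

The estimate also shows why a traditional software stack struggles: each frame leaves only about 0.5 ms of budget for reception, correction and compression.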

Highlights

  • Many X-ray experiments at synchrotron radiation and free-electron laser facilities already produce data streams at throughput rates that are beyond the capacity of classic computer architectures

  • We have evaluated the feasibility of remote direct memory access (RDMA) data transfer in the context of high-performance X-ray detector development using gigabit Ethernet data links, after confirming the flaws of the standard socket API at 100 Gb s⁻¹

  • It appears that one-way communication, a major requirement for staying compatible with detector-embedded electronics, is possible using the RoCEv2 unreliable datagram queue pair and the WRITE verb API with commercially available RDMA network interface cards (RNICs) in data-receiver computers
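The one-way flow described in the last highlight can be illustrated with a conceptual simulation. This is plain Python standing in for one-sided RDMA WRITEs, not actual verbs code; the buffer layout and flag-byte convention are assumptions for illustration. The key point it mimics is that the receiver never replies: because a one-sided WRITE generates no receive completion, the receiver detects frame arrival by polling a flag written last into its own memory.

```python
import threading
import time

FRAME_SIZE = 8                   # illustrative tiny "frame"
FLAG_OFFSET = FRAME_SIZE
# Receiver-side buffer that the simulated RNIC writes into directly:
# payload bytes followed by one completion-flag byte.
remote_buffer = bytearray(FRAME_SIZE + 1)

def sender(payload: bytes) -> None:
    # Emulates a one-sided RDMA WRITE: place the payload in the
    # remote buffer, then set the flag byte last, so a raised flag
    # implies the full frame is present (real RNICs guarantee this
    # ordering within a WRITE).
    remote_buffer[:FRAME_SIZE] = payload
    remote_buffer[FLAG_OFFSET] = 1

def receiver(out: list) -> None:
    # One-way protocol: the receiver only polls local memory and
    # never sends anything back, which is what keeps the scheme
    # compatible with detector-embedded electronics.
    while remote_buffer[FLAG_OFFSET] == 0:
        time.sleep(0.001)
    out.append(bytes(remote_buffer[:FRAME_SIZE]))

result = []
t = threading.Thread(target=receiver, args=(result,))
t.start()
sender(b"frame001")
t.join()
print(result[0])  # b'frame001'
```

In a real deployment the polling loop would run on the GPU or host core that consumes the frame, turning the flag check into the RNIC-to-GPU synchronization point that the abstract proposes.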

Introduction

Many X-ray experiments at synchrotron radiation and free-electron laser facilities already produce data streams at throughput rates that are beyond the capacity of classic computer architectures. The imbalance between the potential of analysis systems and the level of data flux issued from detectors is further increased by various synchrotron upgrades or enhancements in data-collection methods such as fine slicing and continuous acquisition (Willmott, 2019). New generation X-ray detectors have become available, featuring larger sensor areas, higher dynamic ranges, higher pixel densities and faster acquisition rates. This leads to a big data challenge and necessitates the use of innovative software and hardware (Leonarski et al., 2020).
