Abstract

The high data rates expected for the next generation of particle physics experiments (e.g. new experiments at FAIR/GSI and the upgrade of CERN experiments) call for dedicated attention with respect to the design of the needed computing infrastructure. The common ALICE-FAIR framework ALFA is a modern software layer that serves as a platform for simulation, reconstruction and analysis of particle physics experiments. Besides standard services needed for simulation and reconstruction of particle physics experiments, ALFA also provides tools for data transport, configuration and deployment. The FairMQ module in ALFA offers building blocks for creating distributed software components (processes) that communicate with each other via message passing. The abstract "message passing" interface in FairMQ currently has three implementations: ZeroMQ, nanomsg and shared memory. We present the newly developed shared memory transport, which provides significant performance benefits for transferring large data chunks between components on the same node. The implementation in FairMQ allows users to switch between the different transports via a trivial configuration change. The design decisions, implementation details and performance numbers of the shared memory transport in FairMQ/ALFA will be highlighted.

Highlights

  • ALFA[1] is a modern C++ software framework for simulation, reconstruction and analysis of particle physics experiments

  • ALFA extends FairRoot[2] to provide building blocks for the highly parallelized and data flow driven processing pipelines required by the next generation of experiments, such as the upgraded ALICE detector or the FAIR experiments

  • FairMQ[3] is a component in ALFA that provides a C++ Message Queuing Framework that integrates standard industry data transport technologies and provides building blocks for simple creation of data flow actors and pipelines
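The highlights mention that FairMQ lets the transport be selected through configuration alone. As an illustrative sketch (the device binary names and the channel layout are hypothetical, and the exact option syntax should be checked against the FairMQ documentation), switching a pipeline from ZeroMQ to the shared memory transport might look like this:

```shell
# Hypothetical launch of two FairMQ devices forming a PUSH/PULL pipeline.
# The transport option selects the implementation (e.g. zeromq, shmem or
# nanomsg) without any change to the device code; only this configuration
# value differs between runs.
my-sampler-device --id sampler --transport shmem \
    --channel-config name=data,type=push,method=bind,address=tcp://127.0.0.1:5555 &

my-sink-device --id sink --transport shmem \
    --channel-config name=data,type=pull,method=connect,address=tcp://127.0.0.1:5555
```

The key point is that the device code only sees the abstract message-passing interface; which transport backs a channel is decided at deployment time.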

Introduction

ALFA[1] is a modern C++ software framework for simulation, reconstruction and analysis of particle physics experiments. ALFA extends FairRoot[2] to provide building blocks for the highly parallelized and data flow driven processing pipelines required by the next generation of experiments, such as the upgraded ALICE detector or the FAIR experiments. The next generation of particle physics experiments, such as the FAIR/GSI experiments and the upgrade of ALICE at CERN, will chop their output data into manageable pieces called time frames. The large size of the time frame calls for an inter-process transport via shared memory, which avoids copying the data. This improves throughput and relaxes the memory size requirement for the processing nodes. In the last three sections we show what happens when different transports are combined, what the performance looks like and how we ensure a proper cleanup of the memory.

Concepts
Design and implementation
Unmanaged region
Connection to other transports
Performance
Memory cleanup
Conclusion