Abstract

In this paper, we analyze the performance of the prototype software developed for the ATLAS Second-Level Trigger. An OO framework written in C++ has been used to implement a distributed system that collects (simulated) detector data and executes event selection algorithms on it. The software has been run on testbeds of up to 100 nodes with various interconnect technologies. The final system will have to sustain traffic of ~40 Gb/s and will require an estimated ~750 processors. Timing measurements are crucial for issues such as trigger decision latency, assessment of required CPU and network capacity, scalability, and load balancing. In addition, final architectural and technological choices, code optimization, and system tuning require a detailed understanding of both CPU utilization and trigger decision latency. We describe the instrumentation used to disentangle effects due to factors such as OS intervention, blocking on interlocks (the applications are multithreaded), multiple CPUs, and I/O. We then analyze the measurements and conclude with suggestions for improvements to the ATLAS Trigger/DAQ dataflow components in the next phase of the project.
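
The abstract does not spell out the probes used, but a common way to separate per-thread CPU consumption from time spent blocked (on locks, I/O, or the scheduler) is to sample both a wall clock and the POSIX per-thread CPU clock around a code region. The following is a minimal sketch of that idea, assuming a POSIX/Linux platform; ScopedTimer, process_event, and event_queue_lock are hypothetical names for illustration, not identifiers from the paper's framework.

// Minimal sketch (not the paper's actual instrumentation): the gap
// between the wall-clock delta and the per-thread CPU-time delta
// approximates time the thread spent blocked rather than computing.
#include <chrono>
#include <cstdio>
#include <ctime>
#include <mutex>
#include <thread>

struct ScopedTimer {
    const char* label;
    std::chrono::steady_clock::time_point wall0;
    timespec cpu0;

    explicit ScopedTimer(const char* l) : label(l) {
        wall0 = std::chrono::steady_clock::now();
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu0);  // CPU time of this thread only
    }
    ~ScopedTimer() {
        timespec cpu1;
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu1);
        auto wall1 = std::chrono::steady_clock::now();
        double wall_us = std::chrono::duration<double, std::micro>(wall1 - wall0).count();
        double cpu_us  = (cpu1.tv_sec - cpu0.tv_sec) * 1e6 +
                         (cpu1.tv_nsec - cpu0.tv_nsec) * 1e-3;
        std::printf("%s: wall=%.1f us, cpu=%.1f us, blocked~=%.1f us\n",
                    label, wall_us, cpu_us, wall_us - cpu_us);
    }
};

std::mutex event_queue_lock;  // hypothetical shared resource contended by worker threads

void process_event() {
    ScopedTimer t("process_event");
    std::lock_guard<std::mutex> g(event_queue_lock);  // may block under contention
    // ... run an event selection algorithm on the collected data ...
}

int main() {
    std::thread a(process_event), b(process_event);
    a.join();
    b.join();
}

Note that the wall-minus-CPU residual lumps together lock waits, I/O waits, and scheduling delays; attributing it to the individual factors listed above (OS intervention, interlocks, multiple CPUs, I/O) would require additional per-lock counters or OS-level tracing.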
