Abstract
The LHCb Data Acquisition system reads data from over 300 read-out boards and distributes them to more than 1500 event-filter servers. It uses a simple push-protocol over Gigabit Ethernet. After filtering, the data are consolidated into files for permanent storage using a SAN-based storage system. Since the beginning of data-taking many lessons have been learned, and the reliability and robustness of the system have been greatly improved. We report on these changes and improvements, their motivation, and how we intend to develop the system for Run 2. We will also report on how we try to optimise the usage of CPU resources while the LHC is running ("deferred triggering") and the implications for the data acquisition.
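To illustrate the push-based distribution described above, here is a minimal sketch, under our own assumptions, of read-out boards pushing event fragments to event-filter nodes chosen round-robin. The node names, port, header layout and function names are hypothetical and are not LHCb's actual protocol; in the real system the destination assignment and flow control are coordinated centrally rather than decided per board.

```python
import socket
import struct
from itertools import cycle

# Hypothetical list of event-filter farm nodes (host, port).
FARM_NODES = [(f"hlt-node-{i:03d}", 45000) for i in range(1, 4)]

def push_fragments(fragments, nodes=FARM_NODES):
    """Send each event fragment to the next farm node, round-robin.

    `fragments` yields (event_id, payload_bytes) tuples.  A datagram
    socket is used because a push protocol sends the data without
    waiting for any acknowledgement from the receiver.
    """
    destinations = cycle(nodes)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for event_id, payload in fragments:
            # Minimal header: 32-bit event id + 64-bit payload length.
            header = struct.pack("!IQ", event_id, len(payload))
            sock.sendto(header + payload, next(destinations))
```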
Highlights
– O(10^6) Front-end channels
– 300 Read-out Boards with 4 x 1 Gbit/s network links
– 1 Gbit/s based Read-out network
– 1500 Farm PCs
– >5000 UTP Cat 6 links
– 1 MHz read-out rate
– Data is pushed to the Event Filter Farm
– Trigger software is served from central servers, but cached locally on each farm node
– LHCb does not run at the full LHC instantaneous luminosity
– By continuously adjusting the beams (luminosity levelling), we do not suffer from beam depletion over time
– We have to store more data per fill than anticipated
– More disks mean more throughput!
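The growing data volume per fill ties into the "deferred triggering" mentioned in the abstract: events that cannot be processed promptly are buffered on the local disks of the farm nodes and filtered later, e.g. between fills. Below is a minimal sketch of that idea under our own assumptions; the buffer directory, file format and function names are hypothetical, not LHCb's implementation.

```python
import pickle
from pathlib import Path

# Hypothetical local-disk buffer on a farm node; path is illustrative only.
BUFFER_DIR = Path("deferred_buffer")

def handle_event(event, hlt_filter, cpu_is_saturated):
    """Run the HLT at once if CPU is free, otherwise buffer the event.

    `hlt_filter(event)` returns the trigger decision; `cpu_is_saturated()`
    reports whether the node is already fully loaded.
    """
    if not cpu_is_saturated():
        return hlt_filter(event)              # prompt triggering
    # Deferred path: park the raw event on local disk for later.
    BUFFER_DIR.mkdir(parents=True, exist_ok=True)
    with open(BUFFER_DIR / f"event_{event['id']}.raw", "wb") as f:
        pickle.dump(event, f)
    return None                               # decision postponed

def process_deferred(hlt_filter):
    """Outside data-taking, drain the local buffer through the HLT."""
    accepted = []
    for path in sorted(BUFFER_DIR.glob("event_*.raw")):
        with open(path, "rb") as f:
            event = pickle.load(f)
        if hlt_filter(event):
            accepted.append(event["id"])
        path.unlink()                          # free the disk space
    return accepted
```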
Summary
– Readout rate: 1 MHz
– Up to 16 consecutive triggers
– Total event size: 35 kB
– HLT output rate: 2000 Hz
– HLT output bandwidth: 80 MB/s
– Monolithic disk array
– Good redundancy in data writers
– Weak redundancy in file systems and NFS/Samba servers
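As a quick cross-check, the short calculation below relates the figures quoted above (readout rate, event size, HLT output rate) to the link capacity listed in the highlights. All inputs come from the text; the comparison with the 80 MB/s output figure simply notes that accepted events appear to be somewhat larger than the 35 kB average, which is our own reading rather than a statement from the source.

```python
# Cross-check of the quoted DAQ figures (all numbers from the text above).
readout_rate_hz  = 1_000_000       # 1 MHz read-out rate
event_size_bytes = 35_000          # 35 kB total event size
hlt_output_hz    = 2_000           # 2000 Hz HLT output rate
boards           = 300             # read-out boards
links_per_board  = 4               # 4 x 1 Gbit/s links each

# Aggregate throughput into the event-filter farm.
readout_bw = readout_rate_hz * event_size_bytes                 # bytes/s
print(f"read-out bandwidth ~ {readout_bw / 1e9:.0f} GB/s")      # ~35 GB/s

# Total raw link capacity of the read-out boards.
link_capacity = boards * links_per_board * 1e9 / 8              # bytes/s
print(f"link capacity      ~ {link_capacity / 1e9:.0f} GB/s")   # ~150 GB/s

# HLT output bandwidth if accepted events kept the 35 kB average size;
# the quoted 80 MB/s suggests accepted events are somewhat larger.
hlt_bw = hlt_output_hz * event_size_bytes
print(f"HLT output         ~ {hlt_bw / 1e6:.0f} MB/s")          # ~70 MB/s
```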