Abstract

The LHCb experiment [1] took data between December 2009 and February 2013. The data taking conditions and the trigger rate were adjusted several times during this period to make optimal use of the luminosity delivered by the LHC and to extend the physics potential of the experiment. By 2012, LHCb was taking data at twice the instantaneous luminosity and 2.5 times the high-level trigger rate originally foreseen. This represents a considerable increase in the amount of data to be handled compared to the original Computing Model from 2005, both in terms of compute power and of storage. In this paper we describe the changes made to the LHCb computing model during the last two years of data taking in order to process and analyse the increased data rates within limited computing resources. In particular, an original change was introduced at the end of 2011, when LHCb started to use, for reprocessing, compute power that was not co-located with the RAW data, namely Tier2 sites and private resources. The flexibility of the LHCbDirac Grid interware allowed these additional resources to be included easily; in 2012 they provided 45% of the compute power for the end-of-year reprocessing. Several changes were also implemented in the data management model to limit the need to access data from tape, and in the data placement policy to cope with a large imbalance in storage resources across the Tier1 sites. We also discuss the changes being implemented during the LHC Long Shutdown 1 (LS1) to prepare for a further doubling of the data rate when the LHC restarts at a higher energy in 2015.
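As an illustration of the kind of data placement policy mentioned above, the sketch below shows one way new replicas could be assigned to Tier1 sites in proportion to their free disk space. It is a minimal sketch, not LHCbDirac code; the site names, free-space figures and the weighting scheme are all assumptions made for the example.

import random

# Illustrative free-disk figures (in TB) for candidate Tier1 sites.
# These numbers and the policy itself are assumptions for the sketch,
# not LHCbDirac code or real LHCb values.
free_disk_tb = {
    "CERN": 400.0, "CNAF": 250.0, "GRIDKA": 120.0,
    "IN2P3": 300.0, "PIC": 60.0, "RAL": 350.0, "SARA": 180.0,
}

def choose_replica_sites(n_replicas, free_disk):
    """Pick n_replicas distinct sites, favouring those with more free disk.

    A weighted draw without replacement: sites with little free space stay
    eligible but are chosen less often, so new data gradually evens out the
    imbalance in storage resources.
    """
    remaining = dict(free_disk)
    chosen = []
    for _ in range(min(n_replicas, len(remaining))):
        sites = list(remaining)
        weights = [remaining[s] for s in sites]
        site = random.choices(sites, weights=weights, k=1)[0]
        chosen.append(site)
        del remaining[site]          # at most one replica per site
    return chosen

print(choose_replica_sites(3, free_disk_tb))   # e.g. ['RAL', 'CERN', 'IN2P3']

Weighting by free space rather than excluding full sites keeps every Tier1 in play while steering new data towards the sites with spare capacity.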

Highlights

  • Power required for continuous processing of 2012 data was roughly equivalent to that required for the end-of-year reprocessing of 2011 data

  • In the LHCb computing model, user analysis jobs requiring input data are executed at sites holding the data on disk (see the sketch below)

  • Tier2Ds are a limited set of Tier2 sites that are allowed to provide disk capacity for LHCb

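The data-locality rule in the highlights can be pictured with a minimal sketch: a user analysis job that needs input data is only eligible for sites (Tier0/Tier1s or Tier2Ds) holding a disk replica of every input file. This is not the LHCbDirac brokering code; the site names and the replica catalogue below are invented for illustration.

# Sketch of data-locality brokering: a user analysis job that needs input
# data may only run at a site holding a disk replica of every input file.
# Not LHCbDirac code; site names and the catalogue below are invented.

DISK_SITES = {"CERN", "CNAF", "GRIDKA", "IN2P3", "PIC", "RAL", "SARA",  # Tier0/1s
              "CPPM", "Manchester", "NIPNE"}                            # some Tier2Ds

# Hypothetical replica catalogue: logical file name -> sites with a disk copy.
replica_catalogue = {
    "/lhcb/LHCb/Collision12/BHADRON.DST/0001.dst": {"CERN", "RAL", "CPPM"},
    "/lhcb/LHCb/Collision12/BHADRON.DST/0002.dst": {"CERN", "RAL"},
}

def eligible_sites(input_files, catalogue, disk_sites):
    """Return the set of sites at which a job with these inputs may run."""
    sites = set(disk_sites)
    for lfn in input_files:
        sites &= catalogue.get(lfn, set())   # every input must be local on disk
    return sites

job_inputs = list(replica_catalogue)
print(eligible_sites(job_inputs, replica_catalogue, DISK_SITES))   # {'CERN', 'RAL'}

In this picture, restricting the match to disk replicas avoids staging files from tape for user analysis, and adding a Tier2D amounts to adding one more entry to the set of disk-holding sites.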

Summary


LHCb Computing Model (TDR)

❍ Computing resources scale linearly with trigger rate and length of run (see the sketch after this list)
❏ First pass reconstruction runs democratically at CERN+Tier1s
❏ End-of-year reprocessing of the complete year's dataset
❏ Input to user analysis and further centralised processing by analysis working groups
✰ User analysis runs at any Tier1
✰ Users do not have access to RAW data or unstripped data
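The linear-scaling statement in the first bullet is, in essence, a back-of-the-envelope product of trigger rate, live time and per-event costs. The sketch below shows that arithmetic with purely illustrative numbers, not the values assumed in the Computing TDR.

# Back-of-the-envelope scaling: CPU work and storage both grow linearly with
# the trigger rate and the live time of the run.  All numbers are illustrative
# placeholders, not the values assumed in the LHCb Computing TDR.

hlt_rate_hz = 5000            # events per second written by the high level trigger
live_seconds = 4.0e6          # approximate live time of one year of data taking
raw_event_kb = 50.0           # RAW event size
reco_hs06_s_per_event = 20.0  # reconstruction cost per event, in HS06 seconds

n_events = hlt_rate_hz * live_seconds
raw_storage_tb = n_events * raw_event_kb / 1e9          # 1 TB = 1e9 kB
reco_cpu_hs06_years = n_events * reco_hs06_s_per_event / (3600 * 24 * 365)

print(f"{n_events:.1e} events  ->  {raw_storage_tb:.0f} TB of RAW, "
      f"{reco_cpu_hs06_years:.0f} HS06-years per reconstruction pass")

Doubling either the trigger rate or the length of the run doubles both totals, which is the resource growth the paper describes.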

Problems with TDR model
Going beyond the Grid paradigm
Changes to data management model
Data Formats
Data placement of DSTs
Findings
Conclusions