Abstract

During the first run, CMS collected and processed more than 10 billion data events and simulated more than 15 billion events. Up to 100,000 processor cores were used simultaneously, and 100 PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, the tools and services that were developed, and the operations and shift models used to sustain the system. Many of the techniques followed the original computing plan, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how they are shaping our thoughts for 2015.

Highlights

  • The first data taking run of the Large Hadron Collider (LHC) [1] at CERN in Geneva, Switzerland, started in Fall 2010 and ended in Spring 2013

  • This paper presents the operational experience of the Compact Muon Solenoid (CMS) computing infrastructure during LHC run 1

  • CMS operates services based on GRID technologies from the EGI [9], ARC [10] and OSG [11] middlewares, under the umbrella of the Worldwide LHC Computing GRID (WLCG) infrastructure [12], to support the execution of the CMS workflows; among these is the transfer system, in which files are organized into datasets with similar physics content (a minimal illustrative sketch follows this list)

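The dataset-based organization mentioned in the last highlight can be illustrated with a short sketch. This is a hypothetical Python example, not the actual CMS or PhEDEx data model: the file attributes, dataset naming, and grouping function are assumptions chosen only to show how files sharing physics content can be collected into named datasets, the unit in which data is transferred and accounted for.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class EventFile:
        """One data file; 'physics_content' stands in for the selection
        (e.g. trigger stream) that determines its dataset -- illustrative only."""
        name: str
        physics_content: str
        size_gb: float

    def group_into_datasets(files):
        """Group files that share physics content into named datasets,
        assumed here to be the unit handled by the transfer system."""
        datasets = defaultdict(list)
        for f in files:
            # Hypothetical dataset name; not necessarily the real CMS convention.
            datasets[f"/{f.physics_content}/Run1-Example/RAW"].append(f)
        return dict(datasets)

    if __name__ == "__main__":
        files = [
            EventFile("file1.root", "DoubleMuon", 2.1),
            EventFile("file2.root", "DoubleMuon", 1.8),
            EventFile("file3.root", "JetHT", 3.4),
        ]
        for name, members in group_into_datasets(files).items():
            total = sum(f.size_gb for f in members)
            print(f"{name}: {len(members)} files, {total:.1f} GB")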

Introduction

The first data taking run of the Large Hadron Collider (LHC) [1] at CERN in Geneva, Switzerland, started in Fall 2010 and ended in Spring 2013. This paper presents the operational experience of the CMS computing infrastructure during LHC run 1.
