Abstract

CMS Tier 3 centers, frequently located at universities, play an important role in the physics analysis of CMS data. Although different computing resources are often available at universities, meeting all the requirements to deploy a valid Tier 3 able to run CMS workflows can be challenging in certain scenarios. For instance, providing the right operating system (OS) with access to the CernVM File System (CVMFS) on the worker nodes, or having a Compute Element (CE) on the submit host, is not always allowed or possible due to, e.g., lack of root access to the nodes, TCP port network policies, the maintenance of a CE, etc. The Notre Dame group operates a CMS Tier 3 with 1K cores. In addition, researchers have access to an opportunistic pool with more than 25,000 cores that are used via Lobster for CMS jobs, but these resources cannot be used with other standard CMS grid submission tools such as CRAB, as they are not part of the Tier 3 due to their opportunistic nature. This work describes the use of VC3, a service for automating the deployment of virtual cluster infrastructures, to provide the environment (user-space CVMFS access and a customized OS via Singularity containers) needed for CMS workflows to run. We also describe its integration with the OSG Hosted CE service, which adds these resources to CMS as part of our existing Tier 3 in a seamless way.
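
For illustration, the following is a minimal sketch (in Python, using subprocess) of the kind of user-space setup the abstract refers to: mounting CVMFS without root privileges and running a payload inside a Singularity container that supplies a CMS-compatible OS. The cvmfsexec helper, the image path, and the payload shown here are assumptions for illustration only; VC3 automates the equivalent provisioning on the opportunistic worker nodes.

#!/usr/bin/env python3
"""Sketch: run a CMS payload inside a Singularity container on a node
where neither root access nor a system-wide CVMFS mount is available.
The helper (cvmfsexec), image path, and payload are illustrative."""

import subprocess

# Hypothetical CMS OS image distributed through the OSG CVMFS repository.
CMS_IMAGE = "/cvmfs/singularity.opensciencegrid.org/cmssw/cms:rhel7"

# CVMFS repositories the CMS environment needs.
CVMFS_REPOS = ["cms.cern.ch", "singularity.opensciencegrid.org"]


def run_in_cms_container(payload):
    """Mount CVMFS in user space with cvmfsexec, then execute the payload
    inside a Singularity container that provides the expected OS."""
    cmd = (
        ["./cvmfsexec", *CVMFS_REPOS, "--",   # user-space CVMFS mount
         "singularity", "exec",
         "--bind", "/cvmfs",                  # expose CVMFS inside the container
         CMS_IMAGE]
        + list(payload)
    )
    return subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Example payload: show the OS the job actually sees.
    run_in_cms_container(["cat", "/etc/redhat-release"])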

Highlights

  • The Worldwide LHC Computing Grid (WLCG) [1] is composed of four layers, or "tiers", each providing a specific set of services

  • The Center for Research Computing (CRC) provides an opportunistic campus cluster with over 25,000 cores of computing power to which researchers have access, but these resources lack the software components and environment needed by CMS analysis workflows

  • This work describes the use of VC3 [3], a service for automating the deployment of virtual cluster infrastructures, and the Open Science Grid (OSG) Hosted Compute Element (CE) service [4] to provide the grid environment and components needed to build a CMS Tier 3 on Notre Dame opportunistic campus resources at the user level


Summary

Introduction

The Worldwide LHC Computing Grid (WLCG) [1] is composed of four layers, or "tiers", each providing a specific set of services. Local computing resources used by individual university groups to perform the final stages of data analysis are defined as Tier 3s in this infrastructure. The University of Notre Dame high energy physics group operates a CMS Tier 3 with about 1,300 cores for analyzing CMS [2] data submitted locally or through the grid. This work describes the use of VC3 [3], a service for automating the deployment of virtual cluster infrastructures, and the OSG Hosted CE service [4] to provide the grid environment and components needed to build a CMS Tier 3 on Notre Dame opportunistic campus resources at the user level.
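
As a concrete example of the local side of this workflow, the sketch below (Python, assuming the HTCondor Python bindings are installed on the submit host) submits an analysis job to the Tier 3 scheduler. The executable name, resource requests, and container-image attribute are illustrative only; whether and how such jobs reach the opportunistic campus resources is determined by the VC3 and Hosted CE setup described in the following sections.

"""Sketch: submit an analysis job to the local Tier 3 HTCondor scheduler.
Requires the htcondor Python bindings; all job parameters below are
illustrative placeholders, not the site's actual configuration."""

import htcondor

job = htcondor.Submit({
    "executable": "run_analysis.sh",      # hypothetical user analysis script
    "arguments": "dataset_list.txt",
    "output": "analysis.$(ClusterId).out",
    "error": "analysis.$(ClusterId).err",
    "log": "analysis.log",
    "request_cpus": "1",
    "request_memory": "2GB",
    # Illustrative: ask for a CMS-compatible container image for the job.
    "+SingularityImage": '"/cvmfs/singularity.opensciencegrid.org/cmssw/cms:rhel7"',
})

schedd = htcondor.Schedd()                # local Tier 3 submit host
result = schedd.submit(job, count=1)
print(f"Submitted cluster {result.cluster()}")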

VC3: Virtual Clusters for Community Computation
The OSG Hosted CE service
Deploying a CMS Tier 3 with VC3 and the OSG Hosted CE service
Conclusions