Abstract

The Alignment, Calibrations and Databases group at the CMS Experiment delivers Alignment and Calibration Conditions Data to a large set of workflows which process recorded event data and produce simulated events. The current infrastructure for releasing and consuming Conditions Data was designed during the two years of the first LHC long shutdown to respond to use cases from the preceding data-taking period. During the second run of the LHC, new use cases were defined. For the consumption of Conditions Metadata, no common interface existed for the detector experts to use in Python-based custom scripts, resulting in many different querying and transaction-management patterns. A new framework has been built to address these use cases: a simple object-oriented tool that detector experts can use to read and write Conditions Metadata in Oracle and SQLite databases, and that provides a homogeneous method of querying across all services. The tool provides mechanisms for segmenting large sets of conditions while releasing them to the production database, allows for uniform error reporting from the server-side to the client-side, and optimizes the data transfer to the server. The architecture of the new service exploits many of the features made available by the metadata consumption framework to implement the required improvements. This paper presents the details of the design and implementation of the new metadata consumption and data upload framework, as well as analyses of the new upload service’s performance as the server-side state varies.
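
As a rough illustration of what such a homogeneous querying layer can look like from Python, the sketch below hides the choice between an Oracle and a SQLite backend behind a single connection string using SQLAlchemy; the class, table, and column names are illustrative assumptions, not the tool's actual API.

```python
# Hedged sketch only: a homogeneous query interface over Oracle and SQLite.
# The class, table, and column names are illustrative assumptions,
# not the actual API of the CMS conditions tool.
from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Tag(Base):
    """Illustrative mapping of a Conditions Tag record."""
    __tablename__ = "TAG"
    name = Column(String, primary_key=True)
    time_type = Column(String)


class ConditionsMetadata:
    """One entry point: the same query code runs against Oracle or SQLite,
    selected purely by the connection string."""

    def __init__(self, connection_string):
        # e.g. "sqlite:///conditions.db" or "oracle+cx_oracle://user:pw@host/service"
        self.engine = create_engine(connection_string)
        self.Session = sessionmaker(bind=self.engine)

    def get_tag(self, name):
        with self.Session() as session:
            return session.query(Tag).filter(Tag.name == name).one_or_none()


# Usage: detector-expert scripts never write backend-specific SQL.
db = ConditionsMetadata("sqlite:///conditions.db")
Base.metadata.create_all(db.engine)   # for this self-contained example only
tag = db.get_tag("ExampleTag_v1")     # None here, since the example DB is empty
```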

Highlights

  • This paper presents the details of the design and implementation of the new metadata consumption and data upload framework, as well as analyses of the new upload service’s performance as the server-side state varies

  • Several optimisations were implemented with the redesign of the upload service, including splitting each upload over multiple requests; giving the user the option to filter their IOVs by a First Condition Safe Run (FCSR) on the client-side before sending Payloads; and only sending Payloads to the server-side that are not already stored in the destination database (see the sketch after this list)

  • While FCSR filtering is not the strongest of the optimisations, it can significantly reduce the number of Payloads to be uploaded, and in a small amount of time
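
The following sketch illustrates, under assumed function names and data layouts, how the three optimisations listed above might look on the client-side: FCSR filtering of IOVs, hash-based deduplication of Payloads against the server, and splitting the remaining Payloads over several requests.

```python
# Hedged sketch of the client-side optimisations listed above; the function
# names, hash choice, and batch size are assumptions, not the service's API.
import hashlib


def filter_iovs_by_fcsr(iovs, fcsr):
    """Drop IOVs whose 'since' run lies below the First Condition Safe Run,
    so their Payloads never need to be transferred."""
    return [iov for iov in iovs if iov["since"] >= fcsr]


def payload_hash(payload_bytes):
    """Identify a Payload by a content hash, standing in for whatever key the
    real service uses to spot Payloads already present in the destination DB."""
    return hashlib.sha1(payload_bytes).hexdigest()


def chunk(items, size):
    """Split an upload into several smaller requests instead of one large one."""
    items = list(items)
    return [items[i:i + size] for i in range(0, len(items), size)]


# Example flow: filter by FCSR, deduplicate against the server, upload in batches.
iovs = [{"since": 250000, "payload": b"blob-a"}, {"since": 310000, "payload": b"blob-b"}]
iovs = filter_iovs_by_fcsr(iovs, fcsr=300000)

local = {payload_hash(iov["payload"]): iov["payload"] for iov in iovs}
server_hashes = set()  # in practice, returned by a query to the upload server
to_send = {h: p for h, p in local.items() if h not in server_hashes}

for batch in chunk(to_send.items(), size=10):
    pass  # each batch would be sent as one HTTP request to the upload service
```

Filtering and deduplicating before chunking keeps each request small and avoids re-sending Payloads the destination database already holds.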


Summary

Introduction

A new framework has been built to address such use cases: a simple object-oriented tool that detector experts can use to read and write Conditions Metadata in Oracle and SQLite databases, and that provides a homogeneous method of querying across all services. For Metadata Uploading, the server-side defines the class MetadataHandler, which validates Tags and IOVs during instantiation: a MetadataHandler instance therefore contains only Conditions Metadata that has been validated against the Upload Metadata, that is, the data given by the user either through command-line arguments or an Upload Metadata file.
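
A minimal sketch of what validation at instantiation could look like is shown below; the required keys, the exception type, and the field checks are assumptions rather than the actual server-side code.

```python
# Hedged sketch of validation at construction time; the required keys and the
# exception type are assumptions, not the actual server-side MetadataHandler.
class UploadValidationError(Exception):
    """Raised when Upload Metadata fails validation, so the failure can be
    reported uniformly back to the client-side."""


class MetadataHandler:
    REQUIRED_KEYS = {"destinationTags", "inputTag", "since"}

    def __init__(self, upload_metadata):
        # upload_metadata: dict built from command-line arguments or a metadata file.
        missing = self.REQUIRED_KEYS - upload_metadata.keys()
        if missing:
            raise UploadValidationError("missing keys: %s" % sorted(missing))
        since = upload_metadata["since"]
        if not isinstance(since, int) or since < 0:
            raise UploadValidationError("'since' must be a non-negative run number")
        # Only metadata that has passed validation is kept on the instance,
        # so later upload steps can assume it is consistent.
        self.metadata = upload_metadata


# Usage: construction either succeeds with validated metadata or raises.
handler = MetadataHandler({"destinationTags": ["ExampleTag_v1"],
                           "inputTag": "ExampleTag_v1", "since": 300000})
```

Validating eagerly in the constructor means every later step of the upload can rely on the handler holding consistent metadata, and any failure can be reported back to the client-side in one uniform way.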
