Abstract

The Compact Muon Solenoid (CMS) experiment dedicates significant effort to assessing the quality of its data, both online and offline. A real-time data quality monitoring system is in place to spot and diagnose problems as promptly as possible and avoid data loss. The a posteriori evaluation of processed data is designed to categorize them in terms of their usability for physics analysis. These activities produce data quality metadata. The data quality evaluation relies on a visual inspection of the monitoring features. This practice has a cost in terms of human resources and is naturally subject to human arbitration. Potential limitations are linked to the ability to spot a problem within the overwhelming number of quantities to monitor, or to a lack of understanding of evolving detector conditions. In view of Run 3, CMS aims at integrating deep learning techniques in the online workflow to promptly recognize and identify anomalies and to improve the precision of the data quality metadata. The CMS experiment engaged in a partnership with IBM with the objective of supporting the online operations through automation and of generating benchmarking technological results. The research goals, agreed within the CERN Openlab framework, how they matured in a demonstration application, and how they are achieved through a collaborative contribution of technologies and resources, are presented.

Highlights

  • Data Quality (DQ) assessment is an important aspect of every High Energy Physics (HEP) experiment

  • In the era of the Large Hadron Collider (LHC)[1], and of its highly sophisticated detectors, a prompt feedback on the quality of the recorded data is needed to maximize the effectiveness of data taking efforts, and offline quality verification is required to guarantee a good baseline for physics analysis

  • The Compact Muon Solenoid (CMS) experiment engaged in a partnership with the IBM company, with CERN Openlab as facilitator, to investigate the benefits of a future Machine Learning (ML)-based quality monitoring


Summary

Introduction

Data Quality (DQ) assessment is an important aspect of every High Energy Physics (HEP) experiment. Following CERN's [2] long-standing tradition of collaboration with industry and research institutes, the CMS experiment [3] engaged in a partnership with the IBM company, with CERN Openlab as facilitator. This document outlines, in Section 2, the particular physics requirements that govern the choice and design of the CMS-IBM project, while Section 3 describes in some detail the properties of the Machine Learning (ML) methods exploited in the two use cases CMS has studied. The preliminary results, presented at the conference and summarized in Section 4, illustrate the progress achieved over the past few years and anticipate how the future implementation will take place in everyday operations.
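To make the semi-supervised use case listed in the outline below more concrete, here is a minimal sketch of autoencoder-based anomaly detection of the kind the project explores: a network is trained only on certified "good" detector data, so anomalous inputs yield large reconstruction errors. All names, shapes, thresholds, and the synthetic inputs are illustrative assumptions, not the actual CMS implementation.

```python
# Sketch of semi-supervised anomaly detection with an autoencoder (PyTorch).
# Real inputs would be per-lumisection ECAL monitoring quantities;
# random tensors stand in for them here.
import torch
import torch.nn as nn

N_CHANNELS = 360  # hypothetical size of a flattened occupancy map

class Autoencoder(nn.Module):
    def __init__(self, n_in=N_CHANNELS, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_in, 64), nn.ReLU(),
            nn.Linear(64, n_latent), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train only on certified "good" lumisections: the model learns to
# reconstruct nominal detector behaviour.
good_maps = torch.rand(1000, N_CHANNELS)  # stand-in for certified data
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(good_maps), good_maps)
    loss.backward()
    optimizer.step()

# Score unseen lumisections by per-sample mean squared reconstruction error;
# large errors are candidate anomalies for shifter review.
with torch.no_grad():
    new_maps = torch.rand(10, N_CHANNELS)
    errors = ((model(new_maps) - new_maps) ** 2).mean(dim=1)
    threshold = errors.median() * 3  # illustrative cut, tuned on validation data
    flags = errors > threshold
```

Because the training set contains only good data, no labeled examples of failures are needed, which is what makes the approach semi-supervised and well suited to rare or previously unseen detector problems.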

The project
Project use cases and state of the art
ECAL: supervised learning
ECAL: semi-supervised learning
Summary and future plans

