Abstract

The Trigger and Data Acquisition (TDAQ) system of the ATLAS [1] experiment at the Large Hadron Collider [2] at CERN is composed of a large number of distributed hardware and software components that together provide the data-taking functionality of the overall system. During data-taking, large volumes of operational data are produced to continuously monitor the system. The Persistent Back-End for the ATLAS Information System of TDAQ (P-BEAST) is a service based on a custom-built time-series database. It archives all operational monitoring data published online, resulting in about 18 TB of highly compacted and compressed raw data per year. P-BEAST provides command-line and programming interfaces for data insertion and retrieval, including integration with the Grafana platform. Since P-BEAST was developed, several promising database technologies for working efficiently with time-series data have become available. A study was performed to evaluate the possible use of these recent database technologies in the P-BEAST system: the most promising technologies were selected first, and their performance was then evaluated. The evaluation strategy was based both on synthetic read and write tests and on realistic read patterns (e.g., providing data to a set of Grafana dashboards currently used to monitor ATLAS). All tests were executed on a subset of ATLAS operational monitoring data archived during LHC Run II. The details of the testing procedure and results, including a comparison with the current P-BEAST service, are presented.
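To make the "synthetic read and write tests" concrete, the following Python sketch shows what such a benchmark harness could look like. It is a hypothetical illustration, not P-BEAST's actual test code: the TimeSeriesBackend interface, the InMemoryBackend stand-in, and the metric names are all invented for this example. A real evaluation would place a candidate database's client behind the same interface and replay archived ATLAS monitoring data instead of random values.

    """Minimal sketch of a synthetic read/write benchmark for a time-series
    backend.  The interface and the in-memory stand-in are hypothetical,
    invented only so the sketch is self-contained and runnable."""

    import random
    import time
    from typing import Dict, List, Tuple


    class TimeSeriesBackend:
        """Hypothetical minimal interface a candidate backend would implement."""

        def insert(self, series: str, timestamp: float, value: float) -> None:
            raise NotImplementedError

        def query(self, series: str, start: float,
                  end: float) -> List[Tuple[float, float]]:
            raise NotImplementedError


    class InMemoryBackend(TimeSeriesBackend):
        """Trivial stand-in used only to make the sketch executable."""

        def __init__(self) -> None:
            self._data: Dict[str, List[Tuple[float, float]]] = {}

        def insert(self, series: str, timestamp: float, value: float) -> None:
            self._data.setdefault(series, []).append((timestamp, value))

        def query(self, series: str, start: float,
                  end: float) -> List[Tuple[float, float]]:
            return [p for p in self._data.get(series, []) if start <= p[0] <= end]


    def synthetic_benchmark(backend: TimeSeriesBackend,
                            n_series: int = 100,
                            n_points: int = 10_000) -> None:
        """Time bulk writes, then range reads, over synthetic monitoring data."""
        # Write phase: insert n_points samples into each of n_series series.
        t0 = time.perf_counter()
        for s in range(n_series):
            name = f"atlas/tdaq/metric_{s}"  # invented series naming scheme
            for i in range(n_points):
                backend.insert(name, float(i), random.random())
        write_s = time.perf_counter() - t0

        # Read phase: one half-range query per series, mimicking a dashboard
        # that fetches a fixed time window from every plotted series.
        t0 = time.perf_counter()
        for s in range(n_series):
            backend.query(f"atlas/tdaq/metric_{s}", 0.0, n_points / 2)
        read_s = time.perf_counter() - t0

        total = n_series * n_points
        print(f"wrote {total} points in {write_s:.2f}s "
              f"({total / write_s:,.0f} points/s)")
        print(f"read half-range from {n_series} series in {read_s:.2f}s")


    if __name__ == "__main__":
        synthetic_benchmark(InMemoryBackend())

Separating the benchmark driver from the backend behind a small interface is what allows the same write and read workload to be replayed unchanged against each candidate technology, so that throughput numbers remain directly comparable.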
