Abstract
Foundational software libraries such as ROOT are under intense pressure to avoid software regressions, including performance regressions. Continuous performance benchmarking, as part of continuous integration and other code quality testing, is an industry best practice for understanding how the performance of a software product evolves. We present a framework, built from industry best practices and tools, that helps understand ROOT code performance and monitor the efficiency of the code across several processor architectures. It additionally records historical performance measurements for the ROOT I/O, vectorization, and parallelization sub-systems.
Highlights
In the last decade, software development has grown in size and complexity
We introduce sample microbenchmarks for the ROOTBench system; to check scalability, we present two examples, concerning vectorization and threading
Each Grafana panel is tied to a specific data source that belongs to a particular Grafana organization
Summary
Software development has grown in size and complexity. To counteract potential defects arising from this complexity, industry best practice includes extensive quality testing, often done at regular intervals (e.g., “nightly builds”) or before a change is accepted into the code base. Understanding how performance changes over time is important because performance regressions may be classified as critical for experiments’ software stacks, and metrics such as thread scalability and the use of vectorization have become critical to the field. We discuss the challenges of developing a continuous performance monitoring system for ROOT, argue for the need for small-scale benchmarks, and discuss ways to visualize the obtained data. We describe one such solution: the ROOTBench system.