Abstract
This paper presents an algorithm that provides recommendations for optimizing LHCb data storage. The LHCb data storage system is a hybrid system: all datasets are kept as archives on magnetic tape, while the most popular datasets are kept on disk. The algorithm takes the dataset usage history and metadata (size, type, configuration, etc.) and generates a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions, it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for the datasets that are kept on disk. Based on the data popularity and the replica number optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce the waiting times of jobs that use this data.
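The paper does not give code at this point; the following is a minimal sketch of the kind of loss-driven removal decision the abstract describes, assuming a per-dataset predicted probability of future use and a simple loss that trades saved disk space against the expected cost of re-staging from tape. The class, function names, and cost weights are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    size_tb: float   # size on disk, in TB
    p_use: float     # predicted probability of future use (from the popularity model)

def removal_loss(ds, restage_cost_per_tb=5.0, disk_cost_per_tb=1.0):
    """Change in loss if the dataset is removed from disk: we save disk cost
    but risk paying a tape re-staging cost if the dataset is used again.
    The cost weights are hypothetical placeholders."""
    expected_restage = ds.p_use * restage_cost_per_tb * ds.size_tb
    saved_disk = disk_cost_per_tb * ds.size_tb
    return expected_restage - saved_disk  # negative => removal reduces the loss

def recommend_removals(datasets):
    """Recommend for removal every dataset whose removal lowers the loss."""
    return [ds.name for ds in datasets if removal_loss(ds) < 0]

if __name__ == "__main__":
    catalog = [
        Dataset("MC/2012/sim08a", size_tb=4.2, p_use=0.05),
        Dataset("LHCb/Collision12", size_tb=9.8, p_use=0.85),
    ]
    print(recommend_removals(catalog))  # -> ['MC/2012/sim08a']
```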
Highlights
In this module, the data popularity and the predicted future usage intensities are used to estimate which datasets should be kept on disk and how many replicas they should have.
The LHCb collaboration is one of the four major experiments at the Large Hadron Collider at CERN.
In the results section we show a comparison of our algorithm with a simple Least Recently Used (LRU) algorithm.
Summary
In this module, the data popularity and the predicted future usage intensities are used to estimate which datasets should be kept on disk and how many replicas they should have. The dataset usage history is represented as a time series of 104 points. Nadaraya-Watson kernel smoothing, with leave-one-out (LOO) optimization of the smoothing window width, is applied to the time series of the dataset usage history.
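As a sketch of the smoothing step named above, the following applies the Nadaraya-Watson estimator to a 104-point weekly usage series and picks the window width by leave-one-out error. It assumes a Gaussian kernel and a grid search over bandwidths; the kernel choice, the grid, and the synthetic series are illustrative assumptions.

```python
import numpy as np

def nw_smooth(x, y, x_eval, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel of width h."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def loo_mse(x, y, h):
    """Leave-one-out MSE for bandwidth h: predict each point from all others."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)               # exclude the point itself
    y_hat = (w @ y) / w.sum(axis=1)
    return np.mean((y - y_hat) ** 2)

# 104 weekly usage counts for one dataset (synthetic, for illustration)
rng = np.random.default_rng(0)
t = np.arange(104, dtype=float)
usage = np.maximum(0, 50 * np.exp(-t / 40) + rng.normal(0, 3, t.size))

# choose the smoothing window width on a grid by LOO, then smooth
grid = np.linspace(0.5, 10.0, 20)
h_best = min(grid, key=lambda h: loo_mse(t, usage, h))
smoothed = nw_smooth(t, usage, t, h_best)
print(f"LOO-selected bandwidth: {h_best:.2f} weeks")
```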