Abstract

Within 5 years CMS expects to be managing many tens of petabytes of data at over a hundred sites around the world. This represents more than an order of magnitude increase in data volume over existing HEP experiments. The underlying concepts and architecture of the CMS model for distributed data management will be presented, together with technical descriptions of the main data management components for data transfer, dataset bookkeeping, data location and file access. In addition, we will present experience gained from using the system in CMS data challenges and ongoing Monte Carlo production.
