Abstract

The state of the art in Grid-style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP[1], extending the Dirac File Catalogue and file management interface to allow the placement of erasure-coded files: each file is distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file still reconstructed. The tools developed are transparent to the user and, as well as allowing uploading and downloading of data to and from Grid storage, also make it possible to parallelise access across all of the distributed chunks at once, improving data transfer and IO performance. We expect this approach to be of most interest to smaller VOs, which have tighter bounds on the storage available to them, but larger (WLCG) VOs may also be interested as their total data volume grows during Run 2. We provide an analysis of the costs and benefits of the approach, along with future development and implementation plans in this area. At present, the overhead of performing multiple file transfers is the largest obstacle to the competitiveness of this approach.
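
As an illustration of the parallel-access point above, the following Python sketch fetches every chunk of a striped file concurrently and reassembles them in stripe order. It is a minimal sketch only: fetch_chunk, the endpoint names, and the chunk names are hypothetical stand-ins, not part of the DFC interface described in the paper, and a real implementation would replace the fabricated fetch with an actual Grid transfer call.

    from concurrent.futures import ThreadPoolExecutor

    def fetch_chunk(endpoint, chunk_name):
        """Hypothetical placeholder: in the real tools this would be a Grid
        transfer call; here it fabricates bytes so the sketch runs."""
        return ("<%s:%s>" % (endpoint, chunk_name)).encode()

    def download_striped_file(endpoints, chunk_names):
        """Fetch every chunk concurrently, then concatenate in stripe order."""
        with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
            futures = [pool.submit(fetch_chunk, ep, name)
                       for ep, name in zip(endpoints, chunk_names)]
            chunks = [f.result() for f in futures]  # result order == stripe order
        return b"".join(chunks)

    if __name__ == "__main__":
        endpoints = ["se1.example.org", "se2.example.org", "se3.example.org"]
        chunk_names = ["file.chunk0", "file.chunk1", "file.chunk2"]
        print(download_striped_file(endpoints, chunk_names))

Because the futures are submitted and collected in list order, reassembly preserves the stripe order even though the individual transfers complete asynchronously.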

Highlights

  • WLCG[2] VOs have been distributing data across geographical and administrative boundaries since the very start of the project

  • The well-known performance penalty of virtual machines relative to bare metal was not corrected for in these IO benchmarks, so the measurements should be treated as purely relative measures of performance

  • Performance was tested on both single and dual-core systems in order to estimate the effect of contention between threads, but the difference between the two systems was small (< 5%) in almost all cases, so only the single-core results are reproduced here


Introduction

WLCG[2] VOs have been distributing data across geographical and administrative boundaries since the very start of the project. One of the key advantages of modern Reed-Solomon implementations is that they allow effectively arbitrary selection of both the number of chunks into which a piece of data is divided and the number of additional ‘coding chunks’ generated to provide resilience.
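
To make the data-chunk/coding-chunk distinction concrete, the sketch below shows the simplest possible erasure code: k data chunks plus a single XOR parity chunk, which tolerates the loss of any one chunk (the M = 1 case). This is a deliberately simplified stand-in, not the Reed-Solomon code used in the work described here, and all names are illustrative; Reed-Solomon generalises the same idea to an arbitrary number of coding chunks.

    def split_into_chunks(data: bytes, k: int) -> list:
        """Zero-pad data to a multiple of k bytes and split into k equal chunks."""
        chunk_len = -(-len(data) // k)               # ceiling division
        data = data.ljust(k * chunk_len, b"\0")
        return [data[i * chunk_len:(i + 1) * chunk_len] for i in range(k)]

    def xor_parity(chunks: list) -> bytes:
        """Byte-wise XOR of all chunks: the single 'coding chunk'."""
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                parity[i] ^= byte
        return bytes(parity)

    def recover_missing(chunks: list, parity: bytes) -> list:
        """Rebuild the single missing chunk (marked None): XORing the
        surviving data chunks with the parity yields the lost chunk."""
        missing = chunks.index(None)
        survivors = [c for c in chunks if c is not None] + [parity]
        chunks[missing] = xor_parity(survivors)
        return chunks

    if __name__ == "__main__":
        original = b"example file contents to stripe across endpoints"
        data_chunks = split_into_chunks(original, k=4)
        parity = xor_parity(data_chunks)
        data_chunks[2] = None                        # simulate one lost endpoint
        rebuilt = recover_missing(data_chunks, parity)
        assert b"".join(rebuilt).rstrip(b"\0") == original

A Reed-Solomon library (zfec and pyeclib are commonly used Python examples) replaces the parity function with coding over a finite field, which is what permits the free choice of both the data-chunk count and the coding-chunk count noted above.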
