Abstract

For federated storage to work well, some knowledge from each storage system must exist outside that system, regardless of the use case. This is needed to allow coordinated activity; e.g., executing analysis jobs on worker nodes with good access to the data. Currently, this is achieved by clients notifying central services of activity; e.g., a client notifies a replica catalogue after an upload. Unfortunately, this forces end users to use bespoke clients. It also forces clients to wait for asynchronous activities to finish. dCache provides an alternative approach: storage events. In this approach the storage system (rather than the client) becomes the coordinating service, notifying interested parties of key events. At DESY, we are investigating storage events along with Apache OpenWhisk and Kubernetes to build a "serverless" cloud, similar to AWS Lambda or Google Cloud Functions, for photon science use cases. Storage events are more generally useful: catalogues are notified whenever data is uploaded or deleted, tape becomes more efficient because analysis can start immediately after the data is on disk, and caches can be "smart", fetching new datasets preemptively. In this paper we will present work within dCache to support a new event-based interface, with which these and other use cases become more efficient.
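
To illustrate the idea, the following is a minimal sketch of a storage-event consumer, assuming the storage system exposes its event stream over Server-Sent Events (one of the delivery mechanisms discussed later in the paper). The endpoint URL, the JSON payload fields and the catalogue call are hypothetical placeholders, not the exact dCache API.

```python
# Sketch: react to "upload finished" events by registering the file in a catalogue.
# The URL, event names and payload fields below are illustrative assumptions.
import json
import requests

EVENT_STREAM = "https://storage.example.org/api/v1/events/channels/demo"  # hypothetical

def register_in_catalogue(path):
    """Placeholder for notifying a replica catalogue about a new file."""
    print(f"registering {path} in the catalogue")

with requests.get(EVENT_STREAM, stream=True) as resp:
    resp.raise_for_status()
    for raw in resp.iter_lines(decode_unicode=True):
        # SSE delivers events as "data: <payload>" lines; skip keep-alives and comments.
        if not raw or not raw.startswith("data:"):
            continue
        event = json.loads(raw[len("data:"):])
        if event.get("event") == "IN_CLOSE_WRITE":  # hypothetical inotify-style event name
            register_in_catalogue(event.get("path"))
```

With this pattern the catalogue (or any other interested party) no longer depends on bespoke clients reporting their own activity; it simply reacts to notifications from the storage system itself.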

Highlights

  • The behaviour of all network accessible storage systems is constrained by the data access protocol(s) they implement

  • We introduce a new way of interacting with scientific storage: storage events

  • In this paper we have described storage events, a powerful new concept for managing stored data

Summary

Introduction

The behaviour of all network accessible storage systems is constrained by the data access protocol(s) they implement. These protocols follow a request–response model: the client sends a request and the server replies. Some responses contain multiple elements that describe different parts of the client's request (e.g., the response to a WebDAV PROPFIND [1] request, or to an FTP MLSD [2] request), but even in these cases the response is finite, bounded, and the direct result of a specific request. Some protocols also support asynchronous operations, where the client makes an initial request that creates a job within the server containing the details of the desired operation; the client then checks back later to learn the outcome. The SRM v2.2 protocol [3, 4] includes such asynchronous operations.
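
The sketch below illustrates this request–response model with a WebDAV PROPFIND request: the reply carries multiple elements (one per directory entry), yet it is still finite and produced only in direct response to this specific request. The host, path and credentials are hypothetical.

```python
# Sketch of the request/response model: a WebDAV PROPFIND directory listing.
import requests

resp = requests.request(
    "PROPFIND",
    "https://storage.example.org/data/",   # hypothetical endpoint
    headers={"Depth": "1"},                # ask for the directory's immediate children
    auth=("user", "secret"),               # hypothetical credentials
)
print(resp.status_code)   # typically 207 Multi-Status
print(resp.text)          # one <D:response> element per entry, then the reply ends
```

However rich the reply, the server says nothing further until the client asks again; storage events remove exactly this limitation.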

Problems with standard approach
Server-Sent Events
Comparison between Kafka and SSE
Processing data on ingest
Automated data replication
File replicated
Additional use cases
Conclusions and future work