Abstract

Many scientific workflows require large data transfers between distributed instrument facilities, storage, and computing resources. To ensure that these resources are maximally utilized, the R&E networks connecting them must make an inherently unpredictable network behave predictably. In practice, this amounts to per-application over-provisioning of network resources in an attempt to guarantee adequate throughput to users, which often leaves resources under-utilized over time. One promising solution is the use of deadlines and bandwidth calendaring, in which "fair" resource allocation is replaced with deadline-based resource allocation. However, such approaches often suffer from inefficient regulation of resource allocation and from poor failure modes. Our solution, Calibers, therefore approaches bandwidth calendaring and deadline-awareness differently. Calibers uses shaping, metering, and pacing at the edge of the network and at the end systems to let participating clients schedule bandwidth reservations without having to worry about network noise from non-participating clients. Calibers can also fall back to the fair resource allocation of the underlying transport protocols if necessary: for example, if a non-participating flow enters the core of the network, or a sudden network change causes the available bandwidth to be exceeded, the transport protocol's congestion-avoidance mechanism handles the congestion as it normally would. Furthermore, Calibers provides a novel simulation method and resource allocation algorithm. In this paper, we present a prototype architecture for Calibers that uses a central controller with distributed agents to dynamically pace flows at the ingress of the network to meet deadlines. Using Globus/GridFTP, we experimentally demonstrate that pacing can be used to meet data-transfer deadlines that cannot be achieved using TCP alone. Finally, we present dynamic flow pacing algorithms that maximize the acceptance ratio of flows for which deadlines can be met while maximizing network utilization. Our results show that simple heuristics, optimizing locally on the most bottlenecked link, can perform almost as well as heuristics that attempt to optimize globally.
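To illustrate the general idea of deadline-based pacing, the sketch below (in Python) computes the minimum pacing rate each transfer needs to meet its deadline and greedily admits a new flow only if a shared bottleneck link can still satisfy every accepted deadline. This is an assumption-laden example, not the Calibers implementation: the flow fields, the single-bottleneck-link model, and the even split of spare capacity are choices made here for illustration only.

```python
# Illustrative sketch only (not the Calibers algorithm): deadline-based
# pacing-rate assignment and greedy admission on one bottleneck link.
from dataclasses import dataclass

@dataclass
class Flow:
    flow_id: str
    remaining_bytes: float   # bytes still to transfer
    deadline: float          # seconds from now

    def min_rate(self) -> float:
        """Slowest pacing rate (bytes/s) that still meets the deadline."""
        return self.remaining_bytes / max(self.deadline, 1e-9)

def admit_and_pace(flows, new_flow, link_capacity):
    """Admit new_flow only if all accepted flows can still meet their
    deadlines on the shared bottleneck link; otherwise reject it.

    Returns (accepted, pacing_rates) where pacing_rates maps flow_id to
    an assigned rate in bytes/s. Spare capacity is split evenly among
    admitted flows so the link stays utilized while deadlines remain
    feasible (a work-conserving pacing policy assumed for this sketch).
    """
    candidate = flows + [new_flow]
    if sum(f.min_rate() for f in candidate) > link_capacity:
        accepted_set, accepted = flows, False   # infeasible: reject request
    else:
        accepted_set, accepted = candidate, True

    base = sum(f.min_rate() for f in accepted_set)
    spare = max(link_capacity - base, 0.0)
    share = spare / len(accepted_set) if accepted_set else 0.0
    rates = {f.flow_id: f.min_rate() + share for f in accepted_set}
    return accepted, rates

# Example: a 10 Gb/s link (~1.25e9 B/s), two existing transfers, one request.
existing = [Flow("f1", 4e11, 600), Flow("f2", 2e11, 900)]
accepted, rates = admit_and_pace(existing, Flow("f3", 3e11, 300), 1.25e9)
print(accepted, rates)
```

In this toy run the new request is rejected because the three minimum rates together exceed the link capacity, and the two existing transfers are paced at their minimum deadline-meeting rates plus an equal share of the remaining headroom.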
