From its inception, a principal goal of the Next Generation Internet (NGI) has been to provide reliable, scalable, cost-effective, and deployable delivery of data with quality of service (QoS) guarantees as a foundation for innovative NGI applications. But while schemes to provide reliable end-to-end QoS are being actively pursued on a number of fronts, the well-known problems with the scalability, cost, and deployability of end-to-end QoS continue to obstruct progress toward this goal. Our research focuses on the nature and potential value of an approach to providing QoS that allows NGI applications to dynamically manage remote storage resources in order to stage data locally for later delivery. We call this strategy logistical QoS. Logistical QoS generalizes the typical end-to-end model of QoS reservation, permitting much more flexible buffering of messages in order to achieve QoS delivery without imposing difficult end-to-end requirements. Whenever data is available to be sent well before it needs to be received, it can be staged, i.e., moved in advance to a location close to the receiver for later delivery. Isolating the act of buffering data as a distinct operation, independent of delivery to the receiver, opens up a new dimension of freedom in the management of communication and storage resources and offers NGI application developers a wide variety of new opportunities to innovate.

Our project develops logistical QoS as enabling network functionality for application-driven staging and scheduling of distributed computation on the NGI. It is divided into two parts: (1) research on the basic network functionality required to support logistical QoS that is reliable, scalable, cost-effective, and easy to use; and (2) research that investigates the integration of logistical QoS, and the basic network technology that underlies it, with the scheduling of distributed computations using NetSolve and the Network Weather Service.

Our work on logistical QoS centers on the Internet Backplane, which provides a mechanism for managing remote storage resources, and the Internet Backplane Protocol (IBP), the enabling technology for using that mechanism. The idea underlying the Internet Backplane is that the NGI will let us treat the global network as an extension of the processor backplane, provided we have a low-overhead mechanism for fine-grained naming and access to data, analogous to physical addresses and bus transfers. By this analogy, the Internet Backplane is a common namespace for fine-grained management of distributed resources. IBP provides a flexible interface to this functionality, allowing reliable and flexible control of remote storage buffers through a general scheme for naming, staging, delivering, and protecting data.

We will test logistical QoS as an enabling technology for NGI computing using NetSolve. NetSolve is a software environment for networked computing designed to transform disparate computers and software components into a unified, easy-to-access computational service; it is being used by NSF's Partnerships for Advanced Computational Infrastructure to build high-performance systems for distributed computation on leading-edge networks.
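To make the staging idea concrete, the sketch below models an IBP-style interface for naming, staging, and later delivering data through a remote storage buffer near the receiver. It is a minimal illustration under stated assumptions: the StorageDepot class and its allocate/store/load methods are hypothetical stand-ins, not the actual IBP API.

```python
import time
import uuid


class StorageDepot:
    """Hypothetical model of an IBP-style storage depot: a service near the
    receiver that leases named buffers into which data can be staged early."""

    def __init__(self):
        self._buffers = {}  # capability -> {"data", "size", "expires"}

    def allocate(self, size, duration_s):
        """Reserve a buffer and return a capability (an unguessable name)."""
        cap = uuid.uuid4().hex
        self._buffers[cap] = {"data": b"", "size": size,
                              "expires": time.time() + duration_s}
        return cap

    def store(self, cap, data):
        """Sender stages data into the remote buffer well before it is needed."""
        buf = self._buffers[cap]
        if len(data) > buf["size"]:
            raise ValueError("data exceeds reserved buffer size")
        buf["data"] = data

    def load(self, cap):
        """Receiver pulls the staged data later, over a short local hop."""
        buf = self._buffers[cap]
        if time.time() > buf["expires"]:
            raise KeyError("lease expired; buffer reclaimed")
        return buf["data"]


# Usage: the sender stages data close to the receiver ahead of the deadline;
# the receiver fetches it when needed, decoupling buffering from delivery.
depot = StorageDepot()                       # imagine this runs near the receiver
cap = depot.allocate(size=1 << 20, duration_s=3600)
depot.store(cap, b"bulk payload sent early")
print(depot.load(cap))                       # later, low-latency local delivery
```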
We will investigate the implementation of logistical QoS within NetSolve and the integration of IBP with the Network Weather Service (which monitors and forecasts the performance of network and computational resources) to build a scheduling capability that maximizes the performance of NetSolve across next-generation networks.
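As a rough sketch of how such a scheduling capability might use performance forecasts, the code below chooses a staging site from hypothetical per-link bandwidth predictions of the kind the Network Weather Service produces. The forecast values and the choose_depot helper are illustrative assumptions, not the actual NWS or NetSolve interfaces.

```python
# Hypothetical forecasts (MB/s) for candidate staging sites: the sender-to-depot
# link (used early, when the data becomes available) and the depot-to-receiver
# link (used later, when delivery is actually requested).
forecasts = {
    "depot_a": {"sender_to_depot": 4.0, "depot_to_receiver": 95.0},
    "depot_b": {"sender_to_depot": 12.0, "depot_to_receiver": 30.0},
}


def choose_depot(forecasts, payload_mb, deadline_s, lead_time_s):
    """Pick a depot where staging can finish within the available lead time
    and delivery to the receiver can finish within the QoS deadline."""
    best, best_delivery = None, float("inf")
    for name, f in forecasts.items():
        stage_time = payload_mb / f["sender_to_depot"]
        delivery_time = payload_mb / f["depot_to_receiver"]
        if stage_time <= lead_time_s and delivery_time <= deadline_s:
            if delivery_time < best_delivery:
                best, best_delivery = name, delivery_time
    return best


# The payload is available ten minutes early and must reach the receiver
# within 20 seconds once it is requested.
print(choose_depot(forecasts, payload_mb=200, deadline_s=20, lead_time_s=600))
```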