Abstract

Rapid progress in computer technology, processor speed, and memory capacity, as well as the demand for digital audio and video, pose new problems in communication networks. To accommodate a whole range of new applications, networks must operate at very high speeds, and they often require new protocols. Such networks are commonly assumed to require the connection-oriented paradigm as the only way to achieve the desired very high speeds. However, this paradigm, although it does possess certain virtues, is not free from disadvantages that make the effective implementation of such networks difficult. In this dissertation, we study the suitability of a connectionless paradigm, deflection routing, for high-speed networks. High-speed isochronous applications, such as video, require small deviations from the average interarrival delay (low jitter). We show that deflection networks, equipped with reassembly buffers of modest size, can smooth out the jitter and ensure very low packet loss regardless of the traffic pattern and network topology. Another important feature required by high-speed applications is the ability to send a packet to a number of distinct destinations (multicast). We propose and study several simple multicast schemes providing deflection networks with this important feature. We study the performance of several video applications, including those that require multicasting, and show that in a realistic environment their performance in deflection networks is very good. Quality of service is another requirement that must be fulfilled by a high-speed network. We propose and investigate several simple mechanisms for sustaining a given throughput of an isochronous stream regardless of the intensity and pattern of the background datagram traffic. We also study the effect that granting priorities to isochronous traffic has on throughput and jitter.
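As a rough illustration (not taken from the dissertation), the core idea of deflection routing can be sketched as follows: at each node, a packet tries to leave on its preferred output link; if two packets contend for the same link, the loser is "deflected" onto any free link instead of being buffered. The `route_step` function and packet dictionaries below are hypothetical names chosen for this sketch.

```python
import random

def route_step(packets, links_per_node):
    """One synchronous routing step at a single node.

    Each packet names a preferred output link.  When several packets
    contend for the same link, one wins it and the others are
    'deflected' onto any remaining free links -- no packet is ever
    buffered or dropped at the node, it just takes a longer path.
    """
    assignment = {}          # output link index -> packet
    deflected = []
    random.shuffle(packets)  # break contention arbitrarily
    for pkt in packets:
        pref = pkt["preferred_link"]
        if pref not in assignment:
            assignment[pref] = pkt
        else:
            deflected.append(pkt)
    free_links = [l for l in range(links_per_node) if l not in assignment]
    for pkt, link in zip(deflected, free_links):
        assignment[link] = pkt   # misrouted, but stays in the network
    return assignment
```

With as many output links as incoming packets, every packet is forwarded every step, which is why deflection networks need no routing buffers at all.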
In cases in which there is a very large number of isochronous sources, each contributing some traffic to the network, we propose a protocol that allows a source to determine whether its session can be accepted by the network. Possible variations in the interarrival delay caused by sources trying to initiate their sessions are easily smoothed out by the receivers, which can dynamically adjust their playout buffers. We show, using a high-speed application as an example, that this protocol makes it possible to achieve the required long-term quality of service. In general, asynchronous deflection networks offer lower throughput than their synchronous counterparts; this is the price paid for a more feasible implementation and less complex routing. We show how transient buffers affect the throughput of an asynchronous network, and that an appropriate size of these buffers may greatly improve it, bringing it close to the throughput achievable in a synchronous network. Finally, we compare the performance of a deflection network with that of a store-and-forward network. We show that if no resources are reserved in advance, the performance achieved by the deflection network significantly exceeds that of a store-and-forward network, while buffer space requirements in the latter are much higher. (Abstract shortened by UMI.)
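The jitter-smoothing role of a playout buffer mentioned above can be sketched in a few lines (this is a generic illustration, not the dissertation's protocol; the function name, the fixed per-packet `period`, and the `delay` parameter are assumptions of this sketch). The receiver delays playback of packet k until time k*period + delay; a larger delay absorbs more interarrival jitter at the cost of end-to-end latency.

```python
def playout(arrivals, period, delay):
    """Classify received packets as played or lost at the receiver.

    arrivals: dict mapping sequence number -> arrival time.
    Packet k must be in the buffer by its playout deadline
    k*period + delay; packets arriving later miss their slot.
    """
    played, lost = [], []
    for seq, t in sorted(arrivals.items()):
        deadline = seq * period + delay
        (played if t <= deadline else lost).append(seq)
    return played, lost
```

Dynamically adjusting the buffer, as the receivers above do, amounts to picking `delay` at run time from the jitter observed so far.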

Full Text
Paper version not known
