Abstract

Big Data has attracted interest from a wide range of scientific, economic and business communities in its effective capture, storage, analysis and visualisation, and such data is frequently communicated over the Internet for various purposes among government and enterprise sites at different locations. Several experiments and analyses have shown that the applications and transport protocols currently employed on the Internet are not suitable for transferring such voluminous data, because they do not address the requirement of low access latency. This paper presents the issues associated with the basic mechanisms of legacy protocols in the context of high-speed networks for transferring Big Data; for example, TCP's conservative congestion control mechanism may result in very low utilisation of high-bandwidth networks. We present state-of-the-art alternatives proposed in the literature to solve these problems in high-speed networks. We compare several emerging TCP variants, UDP-based protocols and multi-TCP-stream protocols over a number of comparison criteria, e.g. protocol convergence and responsiveness, for handling the communication of huge volumes of data. We note that these protocol alternatives are of significant importance in fulfilling the requirements of emerging data-intensive applications in high-speed networks. In addition, we discuss open research issues and challenges that can serve as a source of motivation for the development and deployment of data-intensive applications on emerging networking technologies.
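The underutilisation claim above can be made concrete with a back-of-the-envelope sketch (illustrative only, not from the paper): standard TCP halves its congestion window on a single loss and then regrows it by roughly one segment per RTT, so on a high bandwidth-delay-product path recovery takes an enormous number of RTTs. The link speed, RTT and MSS values below are assumed for illustration.

```python
# Sketch: why conservative AIMD congestion control underutilises a
# high bandwidth-delay-product (BDP) path. Parameter values are
# illustrative assumptions, not measurements from the paper.

def recovery_time_after_loss(link_bps, rtt_s, mss_bytes=1460):
    """RTTs (and seconds) for AIMD to regrow cwnd from BDP/2 back to BDP,
    growing by one segment per RTT (standard TCP additive increase)."""
    bdp_segments = link_bps * rtt_s / (mss_bytes * 8)  # pipe size in segments
    rtts_needed = bdp_segments / 2                     # cwnd was halved on loss
    return rtts_needed, rtts_needed * rtt_s

# Assumed scenario: a 10 Gbit/s link with a 100 ms round-trip time.
rtts, seconds = recovery_time_after_loss(10e9, 0.1)
print(f"~{rtts:,.0f} RTTs (~{seconds / 60:.0f} minutes) to refill the pipe")
```

Under these assumptions a single loss costs on the order of an hour of ramp-up, during which the link runs well below capacity, which is the core problem the surveyed high-speed variants address.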

Highlights

  • The evolution of the Internet has made possible the deployment of high-speed networks, alongside the increasing use of data-intensive and high-performance applications such as those used in scientific fields, e.g. astronomy [1], meteorology [2], social computing [3], bioinformatics [4] [5] and computational biology [6] [7] [8]

  • The transport layer protocols can be broadly divided into two categories [10]: connection-oriented, reliable protocols, e.g. the Transmission Control Protocol (TCP), and connectionless, unreliable protocols, e.g. the User Datagram Protocol (UDP)

  • Performance-Adaptive UDP (PA-UDP) is vulnerable to high packet losses, since it uses packet loss to calculate the sending rate (as TCP does), which may result in reduced throughput
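The sensitivity of any loss-driven sender can be illustrated with the classic square-root loss model (the Mathis et al. approximation for TCP-like rate control, used here as a stand-in, not PA-UDP's actual formula): achievable rate scales with MSS/RTT divided by the square root of the loss probability, so rising loss collapses throughput quickly.

```python
# Sketch of a loss-based rate model (Mathis-style approximation,
# NOT PA-UDP's exact formula): rate ~ (MSS/RTT) * 1.22 / sqrt(p).
# Shows why loss-driven senders suffer under high packet loss.
import math

def loss_based_rate_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state rate in bit/s for a loss-driven sender."""
    return (mss_bytes * 8 / rtt_s) * 1.22 / math.sqrt(loss_rate)

# Assumed path: 1460-byte MSS, 100 ms RTT; sweep the loss probability.
for p in (1e-6, 1e-4, 1e-2):
    rate = loss_based_rate_bps(1460, 0.1, p)
    print(f"loss={p:.0e}  rate ≈ {rate / 1e6:.1f} Mbit/s")
```

Each 100× increase in loss cuts the modelled rate by 10×, which mirrors the highlight's point that tying the sending rate to observed loss can sharply reduce throughput.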


Summary

INTRODUCTION

The evolution of the Internet has made possible the deployment of high-speed networks, alongside the increasing use of data-intensive and high-performance applications such as those used in scientific fields, e.g. astronomy [1], meteorology [2], social computing [3], bioinformatics [4] [5] and computational biology [6] [7] [8]. We analyse various transport layer protocols with respect to the following performance parameters: congestion control, i.e. the mechanism used to adjust the data transmission rate in response to segment loss; RTT unfairness, i.e. the extent to which flows with different RTTs consume unequal shares of bandwidth; inter-protocol fairness, which requires that a flow of one protocol does not receive a larger share of the network bandwidth than a comparable flow of another protocol; intra-protocol fairness, which requires that two flows of the same protocol share the available network bandwidth equally; and TCP friendliness, which requires that a flow of a new protocol does not consume more network bandwidth than a comparable TCP flow. We further evaluate these protocols on the following performance metrics: throughput, i.e. the rate of successful data delivery over a communication channel; end-to-end delay, i.e. the time taken for a packet to travel across the network from source to destination; packet loss; and jitter, i.e. the variability of packet latency across the network over time.
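Two of the metrics listed above can be sketched in a few lines (illustrative code, not the paper's evaluation methodology): Jain's fairness index is a standard way to quantify how equally per-flow throughputs are shared, and jitter is taken here as the mean absolute difference between consecutive packet latencies. The sample numbers are assumed.

```python
# Sketch (illustrative, not the paper's evaluation code) of two metrics:
# Jain's fairness index over per-flow throughputs, and a simple jitter
# estimate over a sequence of per-packet latencies.

def jain_fairness(throughputs):
    """Jain's index: 1.0 means perfectly fair; 1/n means one flow dominates."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

def mean_jitter(latencies_ms):
    """Mean absolute difference between consecutive packet latencies."""
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

print(jain_fairness([100, 100, 100, 100]))  # equal shares -> 1.0
print(jain_fairness([400, 10, 10, 10]))     # one flow dominates -> near 1/n
print(mean_jitter([20.0, 22.0, 21.0, 25.0]))  # ≈ 2.33 ms
```

The fairness index applies equally to the intra-protocol case (flows of the same protocol) and the inter-protocol/TCP-friendliness cases (flows of different protocols sharing a bottleneck).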

BACKGROUND
UDP-BASED DATA TRANSFER PROTOCOLS
COMPARISON OF PROTOCOLS OF MULTIPLE TCP STREAMS
CONCLUSION