Abstract

Due to the bufferless nature of optical burst-switched (OBS) networks, random burst losses may occur even at low traffic loads. In OBS networks in which TCP is implemented at a higher layer, these random burst losses may be mistakenly interpreted by the TCP layer as congestion in the network, leading to serious degradation of TCP performance. In this paper, we reduce random burst losses through a burst retransmission scheme in which bursts lost due to contention in the OBS network are retransmitted at the OBS layer. The retransmission scheme reduces the probability that the TCP layer falsely detects congestion, thereby improving TCP throughput. We analyze the TCP throughput when OBS networks employ the burst retransmission scheme and develop a simulation model to validate the analytical results. Our simulation results show that an OBS layer with burst retransmission improves TCP throughput by up to a factor of ten over an OBS layer without burst retransmission. This significant improvement arises primarily because the TCP layer triggers fewer timeout-based retransmissions when the OBS retransmission scheme is used.
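
To give a rough sense of why OBS-layer retransmission can yield such a large gain, the following back-of-the-envelope sketch (ours, not from the paper) combines two standard ingredients: if each burst transmission fails independently with contention probability p and up to R OBS-layer retransmissions are allowed, the loss probability seen by TCP drops to p^(R+1); feeding that into the Mathis square-root approximation for TCP throughput shows the resulting speedup. All parameter values and the independence assumption are illustrative only.

```python
import math


def effective_loss(p: float, retx: int) -> float:
    """Probability a burst is still lost after the original send plus
    `retx` OBS-layer retransmission attempts, assuming each attempt
    fails independently with contention probability p."""
    return p ** (retx + 1)


def tcp_throughput(mss: int, rtt: float, p: float) -> float:
    """Mathis et al. approximation: throughput ~ MSS / (RTT * sqrt(2p/3)),
    in bytes per second. Only valid for small p; it also ignores
    receive-window and link-capacity limits."""
    return mss / (rtt * math.sqrt(2.0 * p / 3.0))


if __name__ == "__main__":
    p_burst = 0.01          # per-attempt burst contention probability (illustrative)
    mss, rtt = 1460, 0.05   # 1460-byte segments, 50 ms round-trip time (illustrative)

    base = tcp_throughput(mss, rtt, effective_loss(p_burst, 0))
    for retx in (0, 1, 2):
        p_eff = effective_loss(p_burst, retx)
        thr = tcp_throughput(mss, rtt, p_eff)
        print(f"retx={retx}: p_eff={p_eff:.2e}, "
              f"throughput={thr / 1e6:.2f} MB/s ({thr / base:.1f}x baseline)")
```

Under these assumed numbers, a single OBS-layer retransmission cuts the TCP-visible loss probability from 10^-2 to 10^-4 and yields roughly a tenfold throughput increase, consistent in magnitude with the improvement reported in the abstract; the model is a sketch, not a reproduction of the paper's analysis.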
