Abstract

Optical data center networks (ODCNs) are constructed by interconnecting geographically distributed data centers through optical networks. Owing to their superior computing and transmission performance, ODCNs are widely regarded as a promising network paradigm for popular enterprise IT applications such as cloud computing and 3D rendering. However, ever-increasing traffic bursts and the high cost of hardware upgrades pose severe challenges for resource allocation in ODCNs, which must account not only for service quality but also for the service provider's economic profit. In this paper, we focus on service degradation in ODCNs, which has proven effective in reducing denied requests under overloaded network conditions by releasing a portion of the occupied network resources at an acceptable loss of service quality. Unlike previous service degradation schemes, we take into account the time characteristics of service requests and their impact on resource utilization, and propose a novel time-dependent load-balancing service degradation (TD-LBSD) framework. In TD-LBSD, a time-dependent link weight scheme is designed for bandwidth allocation to avoid the traffic congestion that typically arises when the dynamic nature of service requests is ignored. To support high-level prepaid requests with unknown holding times, we also propose a statistical method that estimates the residual time of occupied lightpaths from the probability distribution of the holding times of past requests. Simulation results verify that the proposed service degradation framework outperforms previous schemes, achieving a profit gain of up to 6.2% and reducing traffic congestion by up to 16.39%.
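
The abstract does not spell out how the residual time is computed or how the time-dependent link weight is formed. The sketch below is a minimal illustration under two assumptions: the residual time of an occupied lightpath is estimated as the empirical mean residual life of past holding times, and the link weight grows with the temporal overlap between the incoming request and the lightpaths still occupying the link. The function names (`estimate_residual_time`, `time_dependent_link_weight`) and the weighting formula are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np


def estimate_residual_time(past_holding_times, elapsed_time):
    """Estimate the expected residual holding time of a lightpath that has
    already been active for `elapsed_time`, using the empirical distribution
    of holding times of past (completed) requests.

    Assumed estimator (illustrative, not from the paper): the empirical mean
    residual life E[T - a | T > a] with a = elapsed_time.
    """
    samples = np.asarray(past_holding_times, dtype=float)
    survivors = samples[samples > elapsed_time]
    if survivors.size == 0:
        # No past request lasted this long; assume the lightpath is about
        # to terminate and report zero residual time.
        return 0.0
    return float(np.mean(survivors - elapsed_time))


def time_dependent_link_weight(base_weight, residual_times, request_duration):
    """Toy time-dependent link weight (assumed form): a link becomes heavier
    the more its occupied lightpaths overlap in time with the incoming
    request, steering new lightpaths away from links that will stay busy.
    """
    overlap = sum(min(r, request_duration) for r in residual_times)
    return base_weight * (1.0 + overlap / max(request_duration, 1e-9))


if __name__ == "__main__":
    # Hypothetical holding times (in hours) of previously completed requests.
    past = [3.0, 5.0, 8.0, 12.0, 20.0]
    # Expected residual time of a lightpath already active for 6 hours:
    # mean of {8, 12, 20} - 6 = 22/3 ~= 7.33 hours.
    print(estimate_residual_time(past, elapsed_time=6.0))
    # Weight of a link whose occupied lightpaths have these residual times,
    # for an incoming request expected to last 4 hours.
    print(time_dependent_link_weight(1.0, residual_times=[2.0, 7.3], request_duration=4.0))
```

In this sketch, links whose occupied capacity is about to be released contribute little to the weight, so the routing stage naturally prefers them over links that will remain loaded for the whole duration of the new request; this is one plausible reading of how a time-dependent weight can reduce congestion.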
