Abstract

Flow scheduling in Data Center Networks (DCNs) is a hot topic as cloud computing and virtualization become the dominant paradigm for meeting the growing demand for digital services. Within the cost of a DCN, the energy consumed by the network infrastructure represents an important portion. When flows have temporal restrictions, scheduling with path selection to reduce the number of active switching devices is an NP-hard problem, as proven in the literature. In this paper, a heuristic approach to schedule real-time flows in data centers is proposed, meeting the temporal requirements while reducing the energy consumption of the network infrastructure through proper path selection. Experiments show good performance of the solutions found relative to exact-solution approximations based on an integer linear programming model. The ability to program the network switches allows dynamic scheduling of flow paths under software-defined network management.

Highlights

  • Current trends in computer application development are oriented toward the use of servers, data centers, and virtualization to cope with the huge demand for digital services

  • Network administrators have been working on the concept of software-defined networks (SDNs) to improve quality of service (QoS) by separating the control plane from the data plane [1]

  • Whereas in traditional networks the administrator has to configure every device individually, with SDN all devices are programmed from a centralized controller, and the configuration can be changed dynamically based on the actual demands of the network


Summary

Introduction

Current trends in computer application development are oriented toward the use of servers, data centers, and virtualization to cope with the huge demand for digital services. Network administrators have been working on the concept of software-defined networks (SDNs) to improve QoS by separating the control plane from the data plane [1]. In this kind of network, control runs on a centralized server rather than on individual networking devices such as switches or routers. With this approach, network administrators can manage traffic flows by implementing policies that provide load balancing among servers, fault-recovery mechanisms when a path breaks, minimal response time to users, and reduced energy demand in the data center, among other optimization criteria. A brief discussion of the results is given in Section 7; in Section 8, conclusions are drawn and future lines of work are described.
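The energy-aware idea behind this kind of path selection can be illustrated with a minimal sketch: among a flow's candidate paths, a greedy controller prefers the path that powers on the fewest additional switches, so flows tend to share already-active devices. All names and data below are illustrative assumptions, not the paper's actual heuristic or model.

```python
def pick_path(candidate_paths, active_switches):
    """Return the candidate path that turns on the fewest new switches."""
    def extra_cost(path):
        return sum(1 for sw in path if sw not in active_switches)
    return min(candidate_paths, key=extra_cost)

def schedule(flows):
    """flows: list of (flow_id, candidate_paths); each path is a list of switch ids."""
    active = set()       # switches that must stay powered on
    assignment = {}      # flow_id -> chosen path
    for flow_id, paths in flows:
        path = pick_path(paths, active)
        active.update(path)
        assignment[flow_id] = path
    return assignment, active

# Toy example: the second flow can reuse the switches the first flow activated.
flows = [
    ("f1", [["s1", "s2"], ["s3", "s4"]]),
    ("f2", [["s3", "s5"], ["s1", "s2"]]),  # reusing s1, s2 costs no extra switch
]
assignment, active = schedule(flows)
print(assignment["f2"])   # ['s1', 's2']
print(sorted(active))     # ['s1', 's2']
```

This greedy sketch ignores the temporal restrictions that make the real problem NP-hard; the paper's heuristic addresses both deadlines and energy in its three phases.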

Related Work
Real-Time Problem Description
Formal Model
Description of the Heuristics
First Phase
Second Phase
Simple
Multistart
Multistart-Adhoc
Third Phase
Experimental Evaluation
20 Iterations
Findings
Discussion
Conclusions and Future Work
