Abstract
As the core infrastructure of cloud computing, datacenter networks place heavy demands on the efficient storage and management of massive amounts of data. The data placement strategy, which decides how data are assigned to nodes for storage, has a significant impact on datacenter performance. However, most existing solutions cannot adapt well to the dynamics of the network. Moreover, they focus on where to store the data (i.e., the selection of the storage node) but not on how to store it (i.e., the selection of the routing path). Since reinforcement learning (RL) has emerged as a promising approach to dynamic network problems, in this paper we integrate RL into datacenter networks to address the data placement issue. Considering the dynamics of resources, we propose a Q-learning based data placement strategy for datacenter networks. By leveraging Q-learning, each node can adaptively select the next hop based on network information collected from downstream and forward the data toward a storage node with adequate capacity along a path with high available bandwidth. We evaluate our proposal in the NS-3 simulator in terms of average delay, throughput, and load balance. Simulation results show that the Q-learning placement strategy can effectively reduce network delay and increase average throughput while achieving load balance among servers.
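To illustrate the per-node Q-learning next-hop selection described above, the following is a minimal sketch in Python. The class name `NodeAgent`, the epsilon-greedy exploration, the hyperparameters, and the reward as a weighted mix of available bandwidth and residual storage capacity are assumptions for illustration; the abstract does not specify the paper's exact reward or update parameters.

```python
import random
from collections import defaultdict

# Assumed hyperparameters (not taken from the paper).
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

class NodeAgent:
    """Hypothetical per-node agent that learns which neighbor to forward data to."""

    def __init__(self, node_id, neighbors):
        self.node_id = node_id
        self.neighbors = neighbors          # candidate next hops
        self.q = defaultdict(float)         # Q[next_hop] -> estimated value

    def select_next_hop(self):
        """Epsilon-greedy choice among neighboring nodes."""
        if random.random() < EPSILON:
            return random.choice(self.neighbors)
        return max(self.neighbors, key=lambda n: self.q[n])

    def update(self, next_hop, reward, downstream_best_q):
        """Standard Q-learning update using feedback collected from downstream."""
        target = reward + GAMMA * downstream_best_q
        self.q[next_hop] += ALPHA * (target - self.q[next_hop])

def reward(avail_bandwidth, residual_capacity, w_bw=0.6, w_cap=0.4):
    # Hypothetical reward: favor next hops offering high available bandwidth
    # that lead toward storage nodes with adequate remaining capacity.
    return w_bw * avail_bandwidth + w_cap * residual_capacity

# Usage example: node "s1" chooses a next hop and updates its Q-value from
# feedback reported by the chosen downstream neighbor.
agent = NodeAgent("s1", neighbors=["s2", "s3"])
hop = agent.select_next_hop()
agent.update(hop,
             reward(avail_bandwidth=0.8, residual_capacity=0.5),
             downstream_best_q=0.0)
```

In this sketch, each node only needs local Q-values and feedback from its downstream neighbors, which is consistent with the distributed, adaptive forwarding behavior the abstract describes.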