Abstract
Data storage optimization (DS, e.g., low-latency data access) in data center networks (DCNs) is a difficult online decision-making problem. Previously, it was handled with heuristics built on static network models, which rely heavily on designers' understanding of the environment. Encouraged by recent successes of deep reinforcement learning in solving intricate online assignment problems, we propose to use the Q-learning (QL) technique to train on and learn from historical DS decisions, which can significantly reduce data access delay. However, QL faces two challenges to wide use in data centers: massive input data and blindness in parameter settings, both of which severely hamper the convergence of the learning process. To solve these two key problems, we develop an evolutionary QL scheme named LFDS (Low latency and Fast convergence Data Storage). In the initial stage of LFDS, the input matrix of QL is sparsified to shrink the dimensionality of the massive input data while retaining as much of its information as possible. In the following training phase, a specialized neural network is adopted to achieve a quick approximation. To overcome the blindness during QL training, the two key parameters, the learning rate and the discount rate, are carefully tested with real data input and network architecture. Preferred ranges of the learning rate and discount rate are recommended for the use of QL in data centers, yielding high training rewards and fast convergence. Extensive simulations with real-world data show that data access latency is decreased by 23.5% and the convergence rate is increased by 15%.
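To make the role of the two tuned parameters concrete, the standard tabular Q-learning update below shows where the learning rate (alpha) and the discount rate (gamma) enter. This is a minimal sketch for illustration only; the variable names are assumptions, not the paper's notation.

    import numpy as np

    def q_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.9):
        # Bootstrapped target: immediate reward plus discounted best next-state value.
        target = reward + gamma * np.max(q_table[next_state])
        # Move Q(state, action) a step of size alpha toward the target.
        q_table[state, action] += alpha * (target - q_table[state, action])
        return q_table

A larger alpha makes each update more aggressive (faster but less stable learning), while gamma controls how strongly future access delay is weighed against the immediate reward; the paper's stated contribution on this point is recommending preferred ranges for both parameters.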
Highlights
With the increased importance of data analysis in cloud data center networks, more and more service providers worldwide, such as Amazon, Google, and Microsoft, rely on data services as part of their core business, where data access performance directly affects overall system performance [1]
4) Based on a real data set, extensive simulation results show that the low latency and fast convergence data access scheme (LFDS) can reduce the average write and read latency by 23.4%, while the convergence time is improved by 15%
DESIGN OF THE LOW LATENCY AND FAST CONVERGENCE DATA ACCESS SCHEME (LFDS)
LFDS is composed of two parts: (1) a basic Deep Q-Learning (DQL) scheme for dealing with the dynamic environment and data access patterns, and (2) a sparse input matrix method to further reduce the input state scale of DQL
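As a rough illustration of these two parts, the sketch below first sparsifies a state matrix by keeping only its largest-magnitude entries, then feeds the reduced input to a small Q-network. The thresholding rule, layer sizes, and names are hypothetical stand-ins, under assumptions, for the paper's sparse input method and specialized network.

    import numpy as np

    def sparsify_state(state_matrix, keep_ratio=0.1):
        # Keep only the top-k entries by magnitude and zero the rest,
        # shrinking the effective input scale while retaining the strongest signals.
        flat = np.abs(state_matrix).ravel()
        k = max(1, int(keep_ratio * flat.size))
        threshold = np.partition(flat, -k)[-k]
        return np.where(np.abs(state_matrix) >= threshold, state_matrix, 0.0)

    class QNetwork:
        # One-hidden-layer approximator mapping a (sparsified) state to one
        # Q-value per candidate storage action.
        def __init__(self, n_inputs, n_actions, n_hidden=64, seed=0):
            rng = np.random.default_rng(seed)
            self.w1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
            self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_actions))

        def q_values(self, state_matrix):
            h = np.maximum(0.0, state_matrix.ravel() @ self.w1)  # ReLU hidden layer
            return h @ self.w2

In a DQL loop, the agent would pick the storage action with the highest predicted Q-value for the sparsified state and update the network toward targets of the same form as the tabular rule shown after the abstract.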
Summary
With the increased importance of data analysis in cloud data center networks, more and more service providers worldwide, such as Amazon, Google, and Microsoft, rely on data services as part of their core business, where data access performance directly affects overall system performance [1]. They battle daily with data latency: slow data access can reduce their ability to deliver new digital products and services, and harm profitability, customer relationships, and operational efficiency. How to take full account of the dynamic factors of data centers when optimizing data storage is still an open challenge.