Abstract
In urban rail transit systems, migrating existing rail transit services to cloud computing platforms can effectively relieve the pressure of data sharing and excessive loads. Allocating computing resources reasonably to guarantee the Quality of Service (QoS) of urban rail transit services is therefore crucial. Traditional resource allocation methods mostly rely on predefined policies: on-demand policies struggle to utilize the total resources efficiently, and threshold-based policies make it hard to set an appropriate threshold for each service. As an autonomous decision-making method, Reinforcement Learning (RL) has been applied in many fields to solve resource allocation problems. However, a complete urban rail transit cloud resource allocation scenario usually has high-dimensional action and state spaces. In this paper, we use Deep Reinforcement Learning (DRL) to allocate resources, since its function approximation mitigates the curse of dimensionality. Several urban-rail-related services are selected as cloud computing users, and resource allocation among these services is formulated as a Deep Q-Network (DQN) problem. We evaluate both a predefined policy and the DQN-based resource allocation policy in a simulated cloud system. Simulation results show that the DQN-based policy obtains better QoS for all selected rail transit services.
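To make the formulation concrete, the sketch below illustrates the kind of resource-allocation MDP the abstract describes: states are current allocations, actions move resource units between services, and the reward reflects QoS (here, the negative shortfall against demand). The paper uses a Deep Q-Network; for a self-contained example a tabular Q-learning agent stands in for the neural function approximator. All specifics (3 services, 10 resource units, the demand profile, the reward shape) are illustrative assumptions, not values from the paper.

```python
import random
from collections import defaultdict

N_SERVICES = 3          # rail-transit services sharing the cloud (assumed)
TOTAL_UNITS = 10        # total computing-resource units to allocate (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def qos_reward(alloc, demand):
    # Reward: negative total unmet demand across services (higher = better QoS).
    return -sum(max(d - a, 0) for a, d in zip(alloc, demand))

def legal_actions(state):
    # An action (i, j) moves one resource unit from service i to service j.
    return [(i, j) for i in range(N_SERVICES) for j in range(N_SERVICES)
            if i != j and state[i] > 0]

def apply_action(state, action):
    i, j = action
    alloc = list(state)
    alloc[i] -= 1
    alloc[j] += 1
    return tuple(alloc)

Q = defaultdict(float)  # Q[(state, action)]; a DQN would approximate this table

def choose_action(state):
    acts = legal_actions(state)
    if random.random() < EPS:                     # epsilon-greedy exploration
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(state, a)])

random.seed(0)
demand = (5, 3, 2)                  # assumed steady per-service demand
state = (TOTAL_UNITS, 0, 0)         # start with everything on service 0
for _ in range(5000):
    action = choose_action(state)
    nxt = apply_action(state, action)
    r = qos_reward(nxt, demand)
    best_next = max(Q[(nxt, a)] for a in legal_actions(nxt))
    # Standard Q-learning update; the DQN replaces this with a gradient step.
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = nxt

print(state)  # the learned allocation should roughly track the demand profile
```

In the paper's setting the state and action spaces are too large for a table, which is exactly why function approximation via a DQN is needed; the transition and reward structure, however, follow the same pattern as above.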