Abstract

Call admission control (CAC) in access networks has become an interesting topic for the research community due to its potential applicability in broadband wireless systems. The admission control problem can be formulated as a Markov decision process (MDP), an approach that has been shown to deliver optimal policies for blocking and dropping probabilities in wireless networks. This, however, typically requires that the system dynamics be known in advance. One approach to solving MDPs lets the CAC agent interact with the environment and learn by "trial and error" to choose optimal actions; thus reinforcement learning (RL) algorithms are applied. Abstraction and generalization techniques can be combined with RL algorithms to solve MDPs with large state spaces. In this paper the authors describe and evaluate an MDP formulation of the problem of finding optimal call admission control policies for WiMAX networks with adaptive modulation and coding. We consider two classes of service (BE and priority UGS) and a variable-capacity channel with a constant bit error rate. Hierarchical reinforcement learning (HRL) techniques are applied to find optimal policies for a multi-task CAC agent. In addition, this article evaluates several neural network training algorithms in order to select a training algorithm suitable for the CAC agent problem.
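To make the "trial and error" idea concrete, the admission decision can be sketched as a tabular Q-learning agent over a load-based state. This is a minimal illustrative sketch only, not the paper's method: the capacity, per-class bandwidth demands, reward values, and departure probability below are assumptions chosen for the example, and the paper's actual formulation (variable channel capacity, HRL decomposition, neural function approximation) is considerably richer.

```python
import random

# Illustrative sketch of a Q-learning CAC agent for two service classes.
# CAPACITY, DEMAND, REWARD and the departure probability are assumed
# values for this example, not parameters taken from the paper.
CAPACITY = 10                   # channel capacity in bandwidth units
DEMAND = {"UGS": 2, "BE": 1}    # units consumed per admitted call
REWARD = {"UGS": 5, "BE": 1}    # per-call reward: UGS (priority) pays more
ACTIONS = (0, 1)                # 0 = reject, 1 = accept

def step(load, call_class, action):
    """Apply an admission decision; return (new_load, reward)."""
    if action == 1 and load + DEMAND[call_class] <= CAPACITY:
        return load + DEMAND[call_class], REWARD[call_class]
    return load, 0  # rejected, or no capacity left

def train(episodes=2000, arrivals=50, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}  # Q[(load, call_class)] -> [q_reject, q_accept]
    for _ in range(episodes):
        load = 0
        for _ in range(arrivals):
            call = rng.choice(("UGS", "BE"))
            q = Q.setdefault((load, call), [0.0, 0.0])
            # epsilon-greedy action selection ("trial and error")
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[x])
            new_load, r = step(load, call, a)
            # some call departs with assumed probability, freeing capacity
            if new_load > 0 and rng.random() < 0.3:
                new_load -= 1
            q_next = max(Q.setdefault((new_load, call), [0.0, 0.0]))
            q[a] += alpha * (r + gamma * q_next - q[a])  # Q-learning update
            load = new_load
    return Q

Q = train()
```

With these rewards the learned greedy policy admits UGS calls whenever capacity allows; the paper's HRL setting instead decomposes such decisions across sub-tasks and replaces the lookup table with a trained neural network to cope with the large state space.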
