Abstract
The Zone Routing Protocol (ZRP) is one of the most reliable and efficient routing protocols for Mobile Ad Hoc Networks (MANETs). However, maintaining Quality of Service, energy efficiency, and optimal resource management is essential for providing timely and reliable communication services. In this paper, a robust and efficient reinforcement-learning-based Dynamic Power Management (DPM) and switching control strategy is developed. Unlike classical DPM models, the proposed model employs both system-layer and PHY-layer information to perform stochastic prediction for scheduling PHY switching. Both known and unknown node/network parameters, such as a node's holding period and bit error probability (BEP), are used in the stochastic prediction. The proposed model aims to maintain minimum BEP and holding period while ensuring maximum resource utilization. To achieve this, the overall DPM problem is formulated as a controlled Markov decision process in which a hidden Markov model, combined with Lagrange relaxation and a cost function, yields optimal resource allocation without compromising transmission quality, latency, or computational cost. Simulation-based evaluations show that the proposed model outperforms classical learning models, achieving a 50% reduction in PHY transmission actions, 94% lower cost consumption, an 83% decrease in buffer cost/delay, and a 94% reduction in packet overflow.
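To make the constrained-MDP-with-Lagrange-relaxation formulation concrete, the sketch below shows value iteration over a scalarized DPM problem: PHY power states crossed with buffer occupancy, where a Lagrange multiplier folds the buffer/delay constraint into a single cost. All names and parameters here (PHY_MODES, BUFFER_LEVELS, LAMBDA, P_ARRIVAL, the cost and transition functions) are hypothetical illustrations, not the paper's actual model; in particular, the paper's HMM-based estimation of hidden parameters such as the BEP is omitted and the arrival probability is assumed known.

```python
import numpy as np

# Hypothetical state/action spaces for illustration only:
# PHY modes (0 = sleep, 1 = active) crossed with buffer occupancy levels.
PHY_MODES = 2
BUFFER_LEVELS = 5
ACTIONS = 2          # 0 = stay in current mode, 1 = switch PHY mode
GAMMA = 0.95         # discount factor
LAMBDA = 0.8         # Lagrange multiplier weighting the buffer/delay constraint
P_ARRIVAL = 0.4      # assumed-known packet arrival probability per slot

def immediate_cost(mode, buf, action):
    """Scalarized (Lagrangian) cost: power + switching + lambda * delay."""
    power_cost = 1.0 if mode == 1 else 0.1       # active PHY draws more power
    switch_cost = 0.5 if action == 1 else 0.0    # penalty for a PHY switch
    delay_cost = buf / (BUFFER_LEVELS - 1)       # normalized buffer backlog
    return power_cost + switch_cost + LAMBDA * delay_cost

def transition(mode, buf, action, arrival):
    """Next state: mode toggles on a switch; buffer drains only when active."""
    next_mode = 1 - mode if action == 1 else mode
    served = 1 if next_mode == 1 else 0          # active PHY serves one packet
    next_buf = min(max(buf + arrival - served, 0), BUFFER_LEVELS - 1)
    return next_mode, next_buf

def q_value(V, m, b, a):
    """Expected discounted cost of action a in state (m, b) under values V."""
    exp_next = sum(pr * V[transition(m, b, a, arr)]
                   for arr, pr in ((1, P_ARRIVAL), (0, 1 - P_ARRIVAL)))
    return immediate_cost(m, b, a) + GAMMA * exp_next

# Value iteration over the scalarized MDP.
V = np.zeros((PHY_MODES, BUFFER_LEVELS))
for _ in range(500):
    V = np.array([[min(q_value(V, m, b, a) for a in range(ACTIONS))
                   for b in range(BUFFER_LEVELS)]
                  for m in range(PHY_MODES)])

# Greedy switching policy: 1 where toggling the PHY mode is optimal.
policy = np.array([[int(np.argmin([q_value(V, m, b, a) for a in range(ACTIONS)]))
                    for b in range(BUFFER_LEVELS)]
                   for m in range(PHY_MODES)])
print(policy)
```

In the full constrained formulation, LAMBDA would itself be tuned (e.g., by subgradient updates) until the delay constraint is met with equality; fixing it here keeps the sketch to a single value-iteration pass.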