Abstract

As networks grow, conventional routing protocols incur increasing computational overhead during route establishment. Rule-based methods offer an alternative to the routing-table updating mechanism, but their scope is also limited in dynamic networks. Reinforcement learning therefore promises a better way of finding routes, yet it requires an evaluation platform that synchronizes the routing model with the learning agent. Unfortunately, OpenAI Gym, the de facto platform for agent evaluation, does not provide a suitable networking environment. This paper therefore proposes, as a novel contribution, a customized networking environment that integrates with OpenAI Gym. The successful deployment of the proposed environment, NetAI-Gym, provides functional and practical results that can be used to develop Q-learning-based routing mechanisms. NetAI-Gym is validated with different numbers of network nodes in terms of episodes vs. reward. The experimental outcome confirms that NetAI-Gym is suitable for solving network-related problems.
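To make the idea concrete, the sketch below shows a minimal Gym-style routing environment and a tabular Q-learning loop of the kind the abstract describes. It is a hypothetical illustration, not the paper's NetAI-Gym code: the class `NetworkRoutingEnv`, its reward values, and the `train` function are all assumptions, and the Gym `reset`/`step` interface is mimicked with plain Python so the example is self-contained.

```python
import random

class NetworkRoutingEnv:
    """Hypothetical Gym-style routing environment (not the paper's code):
    states are nodes, an action picks a neighbour, and reaching the
    destination ends the episode with a positive reward."""

    def __init__(self, adjacency, source, destination):
        self.adjacency = adjacency      # node -> list of neighbour nodes
        self.source = source
        self.destination = destination
        self.state = source

    def reset(self):
        # Gym-style reset: return the initial observation (the source node).
        self.state = self.source
        return self.state

    def step(self, action):
        # Gym-style step: return (observation, reward, done, info).
        next_node = self.adjacency[self.state][action]
        self.state = next_node
        done = next_node == self.destination
        reward = 10.0 if done else -1.0  # penalise every extra hop
        return next_node, reward, done, {}

def train(env, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    q = {n: [0.0] * len(nbrs) for n, nbrs in env.adjacency.items()}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                a = random.randrange(len(q[s]))          # explore
            else:
                a = max(range(len(q[s])), key=lambda i: q[s][i])  # exploit
            s2, r, done, _ = env.step(a)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q
```

After training on a small graph, following the greedy policy in the Q-table recovers a shortest route from source to destination, which is the per-episode reward behaviour the validation section measures.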
