Abstract
The routing algorithm is one of the main factors that directly impact network performance. However, conventional routing algorithms do not consider network data history, for instance, overloaded paths or equipment faults. Routing algorithms based on machine learning are expected to offer advantages by exploiting such network data. Nevertheless, a routing algorithm based on the reinforcement learning (RL) technique may require additional control message headers. In this context, this research presents an enhanced routing protocol based on RL, named e-RLRP, in which this overhead is reduced. Specifically, a dynamic adjustment of the Hello message interval is implemented to compensate for the overhead generated by the use of RL. Different network scenarios with varying numbers of nodes, routes, traffic flows, and degrees of mobility are implemented, in which network parameters such as packet loss, delay, throughput, and overhead are obtained. Additionally, a Voice-over-IP (VoIP) communication scenario is implemented, in which the E-model algorithm is used to predict communication quality. The OLSR, BATMAN, and RLRP protocols are used for performance comparison. Experimental results show that e-RLRP reduces network overhead compared to RLRP and, in most cases, outperforms all of these protocols in terms of both network parameters and VoIP quality.