Abstract

This letter proposes a new scheme, Reward Function Learning for Q-learning-based Geographic routing (RFLQGeo), to improve the performance and efficiency of unmanned robotic networks (URNs). The high mobility of robotic nodes and changing environments pose challenges for geographic routing protocols, and routing becomes even harder when multiple features must be considered simultaneously. Q-learning-based geographic routing protocols (QGeo) with a preconfigured reward function encumber the learning process and increase network communication overhead. To solve these problems, we design a routing scheme that applies the concept of inverse reinforcement learning to learn the reward function in real time. We evaluate the performance of RFLQGeo in comparison with other protocols. The results indicate that RFLQGeo has a strong ability to organize multiple features, improving network performance and reducing communication overhead.
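The abstract gives no implementation details, so the sketch below is only a rough illustration of the stated idea: a standard one-step Q-learning update for next-hop selection paired with a linear reward over link features whose weights are adjusted online rather than preconfigured. The `Node` class, the two features (geographic progress and link quality), the hyperparameters, and the `update_weights` rule are all illustrative assumptions, not the authors' design.

```python
import math
import random
from collections import defaultdict
from dataclasses import dataclass, field

ALPHA, GAMMA, LR, EPS = 0.5, 0.9, 0.01, 0.1  # assumed hyperparameters

@dataclass
class Node:
    nid: int
    x: float
    y: float
    link_quality: float = 1.0              # assumed per-link metric
    neighbors: list = field(default_factory=list)

def dist(a: Node, b: Node) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

q_table = defaultdict(float)               # Q[(node id, neighbor id)]
weights = [0.5, 0.5]                       # learned reward weights

def features(node: Node, nbr: Node, dest: Node) -> list:
    # Illustrative features: normalized geographic progress toward the
    # destination, and the neighbor's link quality.
    progress = (dist(node, dest) - dist(nbr, dest)) / max(dist(node, dest), 1e-9)
    return [progress, nbr.link_quality]

def reward(node: Node, nbr: Node, dest: Node) -> float:
    # Linear reward over features; the weights are learned online
    # instead of being preconfigured.
    return sum(w * f for w, f in zip(weights, features(node, nbr, dest)))

def update_weights(node: Node, good_choice: Node, dest: Node) -> None:
    # IRL-flavored update (an assumption, not the letter's method):
    # shift the weights toward the features of an observed successful
    # forwarding choice and away from the current greedy choice, so the
    # learned reward ranks good links higher in the future.
    greedy = max(node.neighbors, key=lambda n: reward(node, n, dest))
    if greedy is not good_choice:
        fe = features(node, good_choice, dest)
        fg = features(node, greedy, dest)
        for i in range(len(weights)):
            weights[i] += LR * (fe[i] - fg[i])

def choose_next_hop(node: Node, dest: Node) -> Node:
    # Epsilon-greedy forwarding over the current Q-values.
    if random.random() < EPS:
        return random.choice(node.neighbors)
    return max(node.neighbors, key=lambda n: q_table[(node.nid, n.nid)])

def q_update(node: Node, nxt: Node, dest: Node) -> None:
    # Standard one-step Q-learning update driven by the learned reward.
    r = reward(node, nxt, dest)
    best_next = max((q_table[(nxt.nid, n.nid)] for n in nxt.neighbors), default=0.0)
    key = (node.nid, nxt.nid)
    q_table[key] += ALPHA * (r + GAMMA * best_next - q_table[key])
```

The only departure from a fixed-reward QGeo-style scheme here is `update_weights`: the reward's feature weights are treated as learnable state, so the ranking of candidate neighbors can adapt as node mobility and channel conditions change.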
