Abstract

The Internet of Vehicles (IoV) is a communication paradigm that connects vehicles to the Internet so that information can be exchanged across networks. A key challenge in IoV is managing the massive traffic generated by a large number of connected IoT-based vehicles. Network clustering strategies have been proposed to address traffic management in IoV networks, and traditional optimization approaches have been applied to manage network resources efficiently. However, the next-generation IoV environment is highly dynamic, and existing optimization techniques cannot precisely model its dynamic characteristics. Reinforcement learning is a model-free technique in which an agent learns optimal policies by interacting with its environment. We propose an experience-driven approach based on an Actor-Critic Deep Reinforcement Learning framework (AC-DRL) for efficiently selecting the cluster head (CH) to manage network resources, taking into account the noisy nature of the IoV environment. The agent in the proposed AC-DRL efficiently approximates and learns the policy of the actor and the state-action value function of the critic for selecting the CH under dynamic network conditions. Experimental results show improvements of 28% and 15% in satisfying SLA requirements, and of 35% and 14% in throughput, compared to the static and DQN approaches, respectively.
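The abstract does not give the network architecture or training details of AC-DRL, but the core actor-critic idea it relies on can be illustrated with a minimal, self-contained sketch: a softmax actor chooses among candidate cluster heads, a scalar critic estimates the expected reward, and the TD error drives both updates. The candidate quality values and noise model below are hypothetical stand-ins for the noisy IoV link conditions, not the paper's setup.

```python
import math
import random

def softmax(prefs):
    """Numerically stable softmax over actor preferences."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def train_actor_critic(ch_quality, episodes=5000, alpha_pi=0.1,
                       alpha_v=0.1, seed=0):
    """Tabular actor-critic for single-state cluster-head selection.

    ch_quality: hypothetical mean reward of each candidate CH
                (e.g. link stability / residual resources, normalized).
    Returns the learned policy (probability of picking each CH).
    """
    rng = random.Random(seed)
    n = len(ch_quality)
    prefs = [0.0] * n   # actor parameters (softmax preferences)
    v = 0.0             # critic: estimate of expected reward

    for _ in range(episodes):
        pi = softmax(prefs)
        a = rng.choices(range(n), weights=pi)[0]
        # Noisy reward models the dynamic, noisy IoV environment.
        r = ch_quality[a] + rng.gauss(0.0, 0.1)
        delta = r - v                 # TD error, used as the advantage
        v += alpha_v * delta          # critic update
        for b in range(n):            # policy-gradient step for softmax actor
            grad = (1.0 if b == a else 0.0) - pi[b]
            prefs[b] += alpha_pi * delta * grad
    return softmax(prefs)

# The learned policy concentrates on the best candidate CH.
policy = train_actor_critic([0.2, 0.8, 0.5])
best_ch = max(range(3), key=lambda i: policy[i])
```

The full AC-DRL framework replaces these tables with deep networks and conditions on the observed network state, but the actor/critic update structure is the same.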
