Abstract

In the near future, vehicular networks are expected to provide and consume a variety of services for autonomous driving, connected cars, and the Internet of Things (IoT). Practical service scenarios must account for the dynamic environment and the Quality of Service (QoS) requirements of a vehicular network. The goal of this paper is to maximize the service delivery ratio while meeting QoS constraints. We present three issues to be addressed by a road side unit (RSU) acting as a fog server. The first is the scheduling of services with different effective times. The second is the RSU cache replacement strategy under limited storage space. The third is QoS-based message collision control for channels shared by multiple vehicles. This paper addresses these three issues by leveraging the Deep Q Network (DQN), a deep reinforcement learning technique. To this end, the three problems are formulated as Markov Decision Process (MDP) problems, and the effectiveness of the proposed method is demonstrated through experiments. The experimental results substantiate that the proposed DQN-based method can learn a policy that adapts to the situation in each of the defined problems.
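To make the DQN-over-MDP formulation concrete, the following is a minimal sketch, not the paper's implementation: a toy linear Q-function approximator with an experience replay buffer and a periodically synced target network, trained on random stand-in transitions instead of a vehicular-network simulator. All sizes, rewards, and hyperparameters here are illustrative assumptions.

```python
import random
from collections import deque

import numpy as np

# Hypothetical sketch of the DQN training loop: an MDP transition
# (state, action, reward, next_state) is stored in a replay buffer,
# and the Q-function is updated toward the Bellman target
# r + gamma * max_a Q_target(s', a).

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 4, 2   # toy sizes; the paper's state/action spaces differ
GAMMA, LR = 0.9, 0.1

W = rng.normal(scale=0.1, size=(N_ACTIONS, N_STATES))  # online Q-network (linear)
W_target = W.copy()                                    # target network
replay = deque(maxlen=1000)                            # experience replay buffer


def q_values(weights, state):
    """Q(s, a) for all actions under a linear approximator."""
    return weights @ state


def epsilon_greedy(state, eps=0.1):
    """Explore with probability eps, otherwise act greedily."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(W, state)))


def train_step(batch_size=8):
    """One gradient step on the squared TD error over a sampled minibatch."""
    batch = random.sample(list(replay), min(batch_size, len(replay)))
    for s, a, r, s_next, done in batch:
        target = r if done else r + GAMMA * np.max(q_values(W_target, s_next))
        td_error = target - q_values(W, s)[a]
        W[a] += LR * td_error * s


# Toy interaction loop: random transitions stand in for the environment.
for step in range(500):
    s = rng.random(N_STATES)
    a = epsilon_greedy(s)
    r = float(s[a])                      # stand-in reward signal
    s_next = rng.random(N_STATES)
    replay.append((s, a, r, s_next, step % 50 == 0))
    train_step()
    if step % 20 == 0:
        W_target = W.copy()              # periodic target-network sync
```

In the paper's setting, the state would encode the RSU's queue, cache, or channel status, the action would be the scheduling, cache replacement, or transmission decision, and the reward would reflect the service delivery ratio and QoS factors; the training structure above stays the same.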
