Abstract

To relieve the network congestion caused by the large volume of data requests generated by intelligent vehicles in an LTE-V network, fog servers with fog-computing capability are deployed at both cellular base stations and on vehicles, forming an LTE-V-fog network that handles delay-sensitive service requests in the Internet of Vehicles. The weighted total cost, combining delay and energy consumption, is taken as the optimisation objective. First, a reinforcement learning algorithm, Q-learning, based on the Markov decision process is proposed to minimise the weighted total cost. The study then details how the three elements of reinforcement learning (state, action and reward) are defined in the fog computing system. Next, to reduce the problem scale and improve efficiency, the authors add a pre-classification step before reinforcement learning that restricts the set of candidate actions. However, as the number of vehicles in the system grows, tabular Q-learning based on recorded Q-values can suffer from the curse of dimensionality. The authors therefore propose a deep reinforcement learning method, the deep Q-learning network (DQN), which combines deep learning with Q-learning. Experimental results show the advantages of the proposed methods.
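To make the tabular Q-learning pipeline described above concrete, the following is a minimal, self-contained sketch of an offloading agent that minimises a weighted delay-plus-energy cost with epsilon-greedy Q-learning. All numeric parameters, the queue-length state model, and the two-action space (local vs. offload to a fog server) are illustrative assumptions, not the paper's actual system model.

```python
import random

# Toy Q-learning for a vehicle's offloading decision.
# State: number of queued tasks at the vehicle (0..MAX_QUEUE).
# Actions: 0 = process locally, 1 = offload to a fog server.
# Reward: negative weighted cost, cost = W_DELAY*delay + W_ENERGY*energy.
# All constants below are illustrative assumptions.

MAX_QUEUE = 5
ACTIONS = (0, 1)                       # 0: local, 1: offload
W_DELAY, W_ENERGY = 0.6, 0.4           # cost weights
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def cost(state, action):
    """Weighted delay/energy cost of serving one task (toy model)."""
    if action == 0:                    # local execution
        delay = 1.0 + 0.5 * state      # longer queue -> longer local delay
        energy = 2.0                   # local CPU energy
    else:                              # offload to fog server
        delay = 2.0                    # transmission + remote processing delay
        energy = 0.5                   # radio transmission energy
    return W_DELAY * delay + W_ENERGY * energy

def step(state, action):
    """Toy transition: local processing drains the queue; a task may arrive."""
    queue = max(state - 1, 0) if action == 0 else state
    return min(queue + random.choice((0, 1)), MAX_QUEUE)

def train(episodes=2000, steps=20, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(MAX_QUEUE + 1) for a in ACTIONS}
    for _ in range(episodes):
        state = random.randint(0, MAX_QUEUE)
        for _ in range(steps):
            if random.random() < EPSILON:          # epsilon-greedy exploration
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            reward = -cost(state, action)          # minimise cost = maximise reward
            nxt = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - q[(state, action)])
            state = nxt
    return q

q = train()
# Greedy policy extracted from the learned Q-table.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(MAX_QUEUE + 1)}
```

The pre-classification step mentioned in the abstract would correspond here to pruning `ACTIONS` per state before the epsilon-greedy choice; the DQN variant replaces the dictionary `q` with a neural network that maps states to action values, avoiding the table's growth as the state space expands.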
