Abstract

With the rapid increase in the number of vehicles, the explosive growth of data traffic, and the worsening shortage of spectrum resources, existing task offloading schemes perform poorly and on-board terminals cannot compute efficiently. This article therefore proposes a reinforcement-learning-based task offloading strategy for the edge computing architecture of the Internet of Vehicles (IoV). First, the IoV system architecture is designed: the Road Side Unit receives vehicle data within its community and transmits it to a Mobile Edge Computing (MEC) server for analysis, while the control center collects information on all vehicles. Then, the calculation model, communication model, interference model, and privacy-protection issues are formulated to ensure that task offloading in the IoV is well founded. Finally, with minimization of the user cost function as the objective, a Double Deep Q-Network (Double DQN) from deep reinforcement learning is used to solve the problem under real-time changes of network state caused by user movement. The results show that the proposed offloading strategy converges quickly. Moreover, user cost under the proposed strategy is the least affected by the number of users, vehicle speed, and MEC computing power compared with other offloading schemes, and its task offloading rate is the highest, making it better suited to IoV scenarios.
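For reference, the sketch below illustrates the Double DQN target computation that the abstract refers to, where the reward is taken as the negative user cost so that maximizing return minimizes cost. The state encoding, action space (local computing vs. offloading to an MEC server), discount factor, and linear Q-approximators are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

GAMMA = 0.9          # discount factor (assumed)
N_ACTIONS = 2        # 0 = compute locally, 1 = offload to MEC server (assumed)
STATE_DIM = 4        # illustrative state size (e.g. channel gain, queue length)

rng = np.random.default_rng(0)

# Online and target networks, reduced here to linear approximators
# so the sketch stays self-contained and runnable.
w_online = rng.normal(size=(STATE_DIM, N_ACTIONS))
w_target = w_online.copy()

def q_values(weights, state):
    """Q(s, a) for all actions under a linear approximator."""
    return state @ weights

def double_dqn_target(reward, next_state, done):
    """Double DQN: the online network selects the next action,
    the target network evaluates it, reducing overestimation bias."""
    if done:
        return reward
    best_action = int(np.argmax(q_values(w_online, next_state)))
    return reward + GAMMA * q_values(w_target, next_state)[best_action]

# One illustrative transition: reward = -(user cost), i.e. weighted delay + energy.
state = rng.normal(size=STATE_DIM)
next_state = rng.normal(size=STATE_DIM)
user_cost = 0.7
y = double_dqn_target(reward=-user_cost, next_state=next_state, done=False)

# Gradient step on the online network toward the target for the taken action.
action = 1
td_error = y - q_values(w_online, state)[action]
w_online[:, action] += 0.01 * td_error * state

# The target network is synchronized with the online network periodically.
w_target = w_online.copy()
print(f"TD target: {y:.3f}, TD error: {td_error:.3f}")
```

In practice the linear approximators would be replaced by neural networks and the transitions drawn from a replay buffer; the decoupled action selection and evaluation shown here is the core of the Double DQN idea.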

Highlights

  • With the development of the automobile industry and the improvement of economic levels, the number of automobiles keeps increasing

  • The application range of the Internet of Vehicles (IoV) has expanded extensively. Research hotspots such as smart cities and smart transportation are inseparable from vehicle networking technology, and IoV data has grown explosively

  • This places considerable pressure on the existing IoV and core networks

Summary

INTRODUCTION

With the development of the automobile industry and the improvement of economic levels, the number of automobiles is increasing. In-vehicle applications such as real-time road conditions and automatic identification have generated substantial computing needs, and they require large amounts of computing and storage resources that existing in-vehicle equipment cannot provide. This poses huge challenges to the IoV in terms of computing and communication capabilities [6]. Because of the huge amount of data transmitted, there is a large transmission delay from vehicle-mounted equipment to the core network, which cannot meet the delay requirements of some in-vehicle services [7]. How to use a reasonable offloading scheme to complete computing tasks efficiently is a problem worth studying in depth.

RELATED WORK
CALCULATION MODEL
COMMUNICATION MODEL
INTERFERENCE MODEL
CONDITIONAL PRIVACY PROTECTION OF VEHICLE IDENTITY
RESOURCE ALLOCATION BASED ON DOUBLE DQN
EXPERIMENT SCHEME AND RESULT ANALYSIS
Findings
CONCLUSION