Abstract

With the development of the Internet of Vehicles (IoV) and wireless technology, the use of latency-sensitive applications such as autonomous driving and intelligent navigation is growing rapidly, and so is the demand for content. This paper proposes an edge caching approach for the IoV based on multi-agent deep reinforcement learning (ECSMADRL) to address the excessive response delay caused by the sharp increase in IoV data traffic. The approach jointly considers content distribution and caching in dynamic environments: each moving vehicle in the IoV is treated as an agent that adaptively decides how to cache and access content as the environment changes, so as to minimize content-distribution delay. Experiments show that, compared with other methods, the proposed edge caching (EC) approach achieves lower content-distribution delay and a higher content hit rate and success rate.
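
To make the per-vehicle decision process concrete, the sketch below illustrates how each vehicle can act as an independent agent that learns whether to cache requested content so as to reduce delay. It is only a minimal illustration of the idea in the abstract, not the paper's method: it uses tabular Q-learning in place of a deep network, and all names and numbers (content library size, cache capacity, delay values, Zipf exponent, number of vehicles) are illustrative assumptions rather than details from the paper.

```python
import random
import numpy as np

# Minimal sketch: each vehicle is an independent agent deciding whether to
# cache a requested item. Reward is the negative fetch delay, so minimizing
# delay corresponds to maximizing return. All constants are assumptions.
N_CONTENTS = 20        # assumed content library size
CACHE_SIZE = 4         # assumed per-vehicle cache capacity
LOCAL_DELAY = 1.0      # assumed delay when the content is cached locally
REMOTE_DELAY = 10.0    # assumed delay when fetching from the remote server
ZIPF_A = 1.2           # assumed popularity skew of content requests


class VehicleAgent:
    """One IoV vehicle: learns whether to cache each requested item."""

    def __init__(self, lr=0.1, gamma=0.9, eps=0.1):
        # State: (content id, already-cached flag); actions: 0 = don't cache, 1 = cache.
        self.q = np.zeros((N_CONTENTS, 2, 2))
        self.cache = []                # ordered list, front = least recently used
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, content, cached):
        # Epsilon-greedy action selection over the two caching actions.
        if random.random() < self.eps:
            return random.randint(0, 1)
        return int(np.argmax(self.q[content, cached]))

    def step(self, content):
        cached = int(content in self.cache)
        action = self.act(content, cached)
        delay = LOCAL_DELAY if cached else REMOTE_DELAY
        if not cached and action == 1:       # cache the fetched item, evict LRU if full
            if len(self.cache) >= CACHE_SIZE:
                self.cache.pop(0)
            self.cache.append(content)
        elif cached:                          # refresh LRU position on a cache hit
            self.cache.remove(content)
            self.cache.append(content)
        reward = -delay                       # minimizing delay = maximizing reward
        # One-step Q-learning update (next state approximated by the same request).
        best_next = np.max(self.q[content, int(content in self.cache)])
        self.q[content, cached, action] += self.lr * (
            reward + self.gamma * best_next - self.q[content, cached, action]
        )
        return delay, cached


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    agents = [VehicleAgent() for _ in range(5)]       # five vehicles (assumed)
    total_delay, hits, requests = 0.0, 0, 0
    for _ in range(20000):
        agent = random.choice(agents)
        content = int(rng.zipf(ZIPF_A)) % N_CONTENTS  # Zipf-like popularity (assumed)
        delay, cached = agent.step(content)
        total_delay += delay
        hits += cached
        requests += 1
    print(f"average delay: {total_delay / requests:.2f}, hit rate: {hits / requests:.2%}")
```

In this simplified setting each agent learns, from delay feedback alone, to keep popular items cached; the paper's full approach additionally coordinates multiple agents and considers fetching from neighboring vehicles and edge nodes, which this sketch omits.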
