Abstract

To address the high link load of edge caching and the limited storage space of edge servers, a caching architecture based on collaboration between edge nodes and the cloud server is proposed. The cache location of each content item is designed and optimized; it can be the content provider, the cloud server (CS), or an edge node (EN). In the proposed system, cloud servers cooperate with edge servers, and caching performance is improved by coordinating whether content is cached on the cloud server or on an edge server. This paper proposes a cloud-edge collaborative caching model based on a greedy algorithm, comprising a content caching model and a collaborative caching model that account for the network architecture, file popularity estimation, link capacity, and other factors. Correspondingly, a cloud-edge collaborative caching algorithm based on the greedy algorithm is presented: the overall optimization problem is decomposed into a knapsack problem for the cache layout of each layer, and the greedy algorithm is then used to solve the cache placement and cooperative caching problems posed in this paper. The affiliation between the CS cache and the EN caches in the layered architecture is thereby refined. Experimental results show that the proposed edge caching method reduces link load, improves the cache hit rate, and offers a clear advantage in average end-to-end service delay.
