Abstract
The cache capacity of a single fog access point in a fog wireless access network is limited. To reduce the network load, accelerate cache updates, and protect the privacy of user data, a federated reinforcement learning caching strategy that combines the advantages of cloud and edge computing is proposed. Through periodic aggregation, the local model updates of all participants are merged into the global model. As the number of optimization cycles increases, the training loss decreases and the cache hit rate of the proposed method grows rapidly. The number of uploaded parameters is only 60% of that required by conventional federated reinforcement learning, which reduces the network load and accelerates cache updates. Batch verification is more efficient than individual verification, so the time cost of privacy protection is lower. While protecting user privacy, the proposed method improves cache utilization, relieves bandwidth pressure, and enhances the security of data sharing. With its low time cost and strong privacy protection, it is suitable for scenarios in which large volumes of user data must be processed.
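The abstract describes periodic aggregation of local model updates into a global model. The sketch below illustrates one plausible form of that step under stated assumptions: each fog access point trains a tabular Q-model locally and a FedAvg-style weighted average produces the global model each cycle. The names `aggregate_round`, `NUM_CONTENTS`, and the catalogue/action sizes are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of periodic federated aggregation (FedAvg-style), assuming
# each fog access point holds a local Q-table over content items.
# All names and sizes below are hypothetical, chosen only for illustration.
from typing import List
import numpy as np

NUM_CONTENTS = 50   # assumed content catalogue size
NUM_ACTIONS = 2     # assumed actions: 0 = do not cache, 1 = cache


def aggregate_round(local_updates: List[np.ndarray],
                    weights: List[float]) -> np.ndarray:
    """Aggregate local model updates into the global model by weighted averaging."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                    # normalize participant weights
    stacked = np.stack(local_updates)               # (participants, contents, actions)
    return np.tensordot(w, stacked, axes=1)         # weighted average over participants


# Example: three fog access points report locally trained Q-tables each cycle.
rng = np.random.default_rng(0)
global_q = np.zeros((NUM_CONTENTS, NUM_ACTIONS))
for cycle in range(5):                              # periodic aggregation cycles
    local_qs = [global_q + 0.1 * rng.standard_normal(global_q.shape)
                for _ in range(3)]                  # stand-in for local RL training
    global_q = aggregate_round(local_qs, weights=[1.0, 1.0, 1.0])
```

In practice the weights would typically reflect each participant's amount of local experience, and only a subset of parameters might be uploaded per cycle, which is consistent with the reduced parameter-upload volume reported in the abstract.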