Abstract
Edge servers can cache delay-sensitive and resource-intensive applications to reduce the latency of user tasks. However, because their resources are limited, edge servers cannot cache every service or process every user request the way remote clouds can. To exploit their low latency, the limited resources of edge servers must be used fully, and user requests must be allocated to them sensibly for processing. At the same time, to keep the server cluster operating efficiently over the long term, the load across the cluster should also be balanced. Balancing server load while giving users the best possible experience is therefore a pressing problem. Solving it involves three challenges: the interaction between service placement and request scheduling, the tradeoff between communication and computation, and the joint consideration of response time and edge server load. We propose a service placement algorithm based on user visits and a request scheduling algorithm based on simulated annealing, and we verify their advantages in response time and server load balancing. Experimental results on real data sets show that our algorithms cope with realistic conditions and quickly converge to favorable solutions.
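The abstract does not spell out the scheduling algorithm's details, so the following is only a minimal, illustrative sketch of how a simulated-annealing-based request scheduler might assign requests to edge servers under a cost that mixes response time and load imbalance; the function names, the cost weights, and the neighbor move are assumptions for illustration, not the authors' actual method.

```python
# Hypothetical sketch (not the paper's algorithm): simulated annealing over
# request-to-server assignments, minimizing a weighted sum of total response
# time and load imbalance across edge servers.
import math
import random


def cost(assignment, resp_time, capacity, alpha=0.5):
    """Weighted sum of total response time and load imbalance (assumed form)."""
    total_time = sum(resp_time[r][s] for r, s in enumerate(assignment))
    load = [0.0] * len(capacity)
    for r, s in enumerate(assignment):
        load[s] += 1.0 / capacity[s]
    imbalance = max(load) - min(load)
    return alpha * total_time + (1 - alpha) * imbalance


def schedule(resp_time, capacity, t0=1.0, t_min=1e-3, cooling=0.95, iters=100):
    """Anneal an assignment of len(resp_time) requests to len(capacity) servers."""
    n_req, n_srv = len(resp_time), len(capacity)
    current = [random.randrange(n_srv) for _ in range(n_req)]
    current_cost = cost(current, resp_time, capacity)
    best, best_cost = current[:], current_cost
    t = t0
    while t > t_min:
        for _ in range(iters):
            # Neighbor move: reassign one random request to a random server.
            cand = current[:]
            cand[random.randrange(n_req)] = random.randrange(n_srv)
            cand_cost = cost(cand, resp_time, capacity)
            delta = cand_cost - current_cost
            # Accept improvements always; accept worse moves with probability
            # exp(-delta / t), which shrinks as the temperature cools.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = cand, cand_cost
                if current_cost < best_cost:
                    best, best_cost = current[:], current_cost
        t *= cooling
    return best, best_cost
```

As a usage example, `schedule(resp_time, capacity)` could be called with `resp_time[r][s]` holding the estimated response time of request `r` on server `s` and `capacity[s]` the processing capacity of server `s`; the returned assignment trades off latency against balance according to the assumed weight `alpha`.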