Abstract

Edge computing extends computing resources from the data center to the edge of the network to better handle latency-sensitive tasks. However, with the rise of the Internet of Things, edge devices with limited processing capabilities struggle to execute requests under fluctuating request peaks. To meet the deadline constraints of latency-sensitive tasks, a feasible solution is to offload some of these tasks to other nearby edge devices. This article studies the request migration problem in edge computing systems and minimizes the request deadline violation rate under realistic online arrival patterns, performance interference phenomena, and deadline constraints. Since a request comprises multiple services and migrating a request changes the resource contention pressure on servers, we decompose the problem into three sub-problems: splitting the request deadline to determine the maximum response time of each service, modeling service performance under different resource pressures, and deciding the request migration strategy. To this end, we propose two deadline splitting methods, a performance interference model under multi-resource pressure, and two heuristic request migration strategies. Because this article considers online edge scenarios, the number and types of requests are unknown in advance. Simulation experiments show that our method incurs only one-third as many deadline violations as competing methods.
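The abstract names two deadline splitting methods without detailing them. As an illustration only, the following minimal Python sketch shows two common ways an end-to-end request deadline can be divided among the services in a request chain (equal and proportional splitting); these specific methods and all function names are assumptions, not necessarily the paper's actual algorithms.

```python
from typing import List

def split_deadline_equal(deadline: float, num_services: int) -> List[float]:
    """Equal split: every service in the chain gets the same share
    of the end-to-end request deadline."""
    return [deadline / num_services] * num_services

def split_deadline_proportional(deadline: float,
                                est_exec_times: List[float]) -> List[float]:
    """Proportional split: each service's share of the deadline is
    weighted by its estimated standalone execution time, so slower
    services receive a larger slice of the budget."""
    total = sum(est_exec_times)
    return [deadline * t / total for t in est_exec_times]

# Example: a 100 ms request composed of three services whose
# standalone execution times are 10, 30, and 20 ms.
print(split_deadline_equal(100.0, 3))                    # [33.3, 33.3, 33.3]
print(split_deadline_proportional(100.0, [10.0, 30.0, 20.0]))  # [16.7, 50.0, 33.3]
```

The per-service deadlines produced by such a split are what a migration strategy would then try to satisfy on each candidate node.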
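Likewise, the interference model and migration heuristics are not specified in the abstract. The sketch below shows how the three sub-problems could fit together, pairing a hypothetical single-parameter pressure model with a greedy stay-or-migrate rule; the EdgeNode class, the 1/(1 - pressure) inflation curve, and the fixed hop delay are all illustrative assumptions rather than the paper's method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EdgeNode:
    name: str
    load: float             # combined resource pressure, 0.0 (idle) to 1.0 (saturated)
    base_latency_ms: float  # service response time on an unloaded node

    def predicted_response_ms(self, extra_load: float = 0.0) -> float:
        # Toy interference model: response time inflates as combined
        # resource pressure approaches saturation.
        pressure = min(self.load + extra_load, 0.99)
        return self.base_latency_ms / (1.0 - pressure)

def choose_target(service_deadline_ms: float, request_load: float,
                  local: EdgeNode, neighbors: List[EdgeNode],
                  hop_delay_ms: float = 2.0) -> EdgeNode:
    """Greedy heuristic: keep the request local if its per-service deadline
    still holds; otherwise migrate to the neighbor with the smallest
    predicted response time, charging a fixed network hop delay."""
    local_time = local.predicted_response_ms(request_load)
    if local_time <= service_deadline_ms:
        return local
    best = min(neighbors, key=lambda n: n.predicted_response_ms(request_load))
    if best.predicted_response_ms(request_load) + hop_delay_ms < local_time:
        return best
    return local

# Example: the local node is under heavy pressure; two neighbors are lighter.
local = EdgeNode("edge-0", load=0.9, base_latency_ms=5.0)
neighbors = [EdgeNode("edge-1", load=0.5, base_latency_ms=5.0),
             EdgeNode("edge-2", load=0.7, base_latency_ms=5.0)]
target = choose_target(service_deadline_ms=20.0, request_load=0.05,
                       local=local, neighbors=neighbors)
print(target.name)  # edge-1: lowest predicted response time after migration
```

Note how migration itself adds load (the `extra_load` term) to the receiving node, which is the resource-contention feedback the abstract highlights as the reason the problem must be decomposed.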
