Abstract

Edge computing (EC) has emerged as a paradigm aimed at reducing data transmission latency by bringing computing resources closer to users. However, the limited scale and constrained processing power of edge servers make it difficult to match the resource availability of larger cloud networks. Load balancing (LB) algorithms play a crucial role in distributing workload among edge servers and minimizing user latency. This paper presents a novel set of distributed LB algorithms that leverage machine learning techniques to overcome three limitations of our previous LB algorithm, EVBLB: (i) its reliance on static time intervals for execution, (ii) its need for complete information about all server resources and queued requests when selecting neighbors, and (iii) its use of a central coordinator to dispatch incoming user requests across edge servers. To offer increased control, custom configuration, and scalability for LB on edge servers, we propose three efficient algorithms based on Q-learning (QL), multi-armed bandits (MAB), and gradient bandits (GB). The QL algorithm predicts the next execution time of the EVBLB algorithm by incorporating rewards obtained from previous executions, thereby improving performance across several metrics. The MAB and GB algorithms prioritize near-optimal neighbor node servers while accounting for dynamic changes in request rate, request size, and edge server resources. Through simulations, we evaluate and compare the algorithms in terms of network throughput, average user response time, and a novel LB metric for workload distribution across edge servers.
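
To make the bandit-based neighbor selection concrete, the following is a minimal Python sketch of a gradient bandit selector over neighbor edge servers, assuming a softmax policy over per-neighbor preferences and a reward defined as the negative observed response time. The class name, step size, and reward definition are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    class GradientBanditNeighborSelector:
        """Hypothetical gradient-bandit selector over neighbor edge servers.

        Preferences are mapped to a softmax distribution; after each dispatch,
        the chosen neighbor's preference is nudged by the reward relative to a
        running average baseline (standard gradient-bandit update).
        """

        def __init__(self, num_neighbors, step_size=0.1):
            self.h = np.zeros(num_neighbors)   # preference per neighbor
            self.alpha = step_size             # preference learning rate
            self.baseline = 0.0                # running average reward
            self.t = 0                         # number of updates so far

        def probabilities(self):
            # Softmax over preferences (numerically stabilized).
            exp_h = np.exp(self.h - self.h.max())
            return exp_h / exp_h.sum()

        def select(self):
            # Sample a neighbor index according to the softmax policy.
            return np.random.choice(len(self.h), p=self.probabilities())

        def update(self, chosen, reward):
            # Reward here is assumed to be the negative observed response
            # time, so faster neighbors accumulate higher preference.
            self.t += 1
            self.baseline += (reward - self.baseline) / self.t
            pi = self.probabilities()
            advantage = reward - self.baseline
            # Gradient-bandit preference update: chosen arm moves with the
            # advantage, all arms move against it in proportion to pi.
            self.h -= self.alpha * advantage * pi
            self.h[chosen] += self.alpha * advantage

In a simulation loop, one would call select() to pick a neighbor for an incoming request, measure the resulting response time, and pass its negative value to update(), so that the policy gradually concentrates probability on near-optimal neighbors as conditions change.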
