Abstract

Edge computing has emerged as a promising solution for reducing communication delays and enhancing the performance of real-time applications. However, the limited processing power of edge servers makes efficient task processing challenging. In this paper, we propose a Q-learning-based load-balancing method that optimizes the distribution of real-time tasks between edge servers and cloud servers to reduce processing time. The proposed method is dynamic and adaptive, accounting for constantly changing network conditions and server utilization. To evaluate its effectiveness, we conduct extensive simulations in an Edge-Cloud network environment. The simulation results demonstrate that the proposed method significantly reduces processing time compared to traditional static load-balancing methods. The Q-learning algorithm enables the load-balancing system to learn an optimal decision-making strategy for allocating each task to the most appropriate server. Overall, the proposed method provides a dynamic and efficient way to balance workload between edge and cloud servers, supports real-time task processing in edge computing environments, and can contribute to the development of high-performance edge computing systems.
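To illustrate the idea described in the abstract, the following is a minimal sketch of tabular Q-learning applied to edge-versus-cloud task allocation. The state discretization (three edge-load levels), the action set (process on edge or offload to cloud), and the toy latency model are illustrative assumptions, not the paper's exact formulation.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
N_STATES, N_ACTIONS = 3, 2               # edge load: low/medium/high; action: 0=edge, 1=cloud
q_table = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def processing_time(state, action):
    """Toy latency model (assumed): edge time grows with its load,
    cloud time is load-independent but includes network delay."""
    if action == 0:                      # process locally on the edge server
        return 1.0 + 2.0 * state
    return 3.0                           # offload to the cloud

def step(state, action):
    """Reward is negative processing time; offloading relieves edge load,
    local processing increases it."""
    reward = -processing_time(state, action)
    if action == 1:
        next_state = max(state - 1, 0)
    else:
        next_state = min(state + 1, N_STATES - 1)
    return next_state, reward

random.seed(0)
state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q_table[state][a])
    next_state, reward = step(state, action)
    # standard Q-learning update
    best_next = max(q_table[next_state])
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])
    state = next_state

# Greedy policy learned per load level (0 = keep on edge, 1 = offload to cloud)
policy = [max(range(N_ACTIONS), key=lambda a: q_table[s][a]) for s in range(N_STATES)]
print(policy)
```

Under this toy model the learned policy tends to keep tasks on the edge while its load is low and to offload as the load rises, which is the adaptive behavior the abstract attributes to the proposed method.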
