Abstract

In distributed environments, cloud computing is widely used to manage user requests for resources and services. Resource scheduling handles these requests according to their priorities within a given time frame. Today's healthcare systems rely on smart devices connected to the internet, which must process the massive volumes of data sensed by smart medical sensors without sacrificing performance factors such as throughput and latency. This has created a need for load balancing among the operational smart devices to prevent any of them from becoming unresponsive. Load balancing manages large amounts of data in both centralised and distributed settings. In this paper, a load-balancing framework for resource scheduling in cloud-based healthcare environments is presented. We use optimisation and reinforcement learning algorithms, namely the genetic algorithm (GA), SARSA, and Q-learning, for resource scheduling; these algorithms predict an optimal assignment for managing load in cloud-based healthcare environments. The proposed mechanism is energy efficient, achieves a low makespan, and incurs low latency. It is implemented in MATLAB, and its performance is analysed using the latency, makespan, and throughput metrics. Compared with the existing technique, the proposed mechanism achieves higher throughput and a lower makespan.
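To make the scheduling idea concrete, the following is a minimal tabular Q-learning sketch of load balancing, not the paper's MATLAB implementation: the number of servers, the task durations, the load-bucket discretisation, and the reward (the negative of the current makespan, i.e. the maximum server load) are all illustrative assumptions.

```python
import random
from collections import defaultdict

def discretize(loads, bucket=5.0):
    # Bucket each server's accumulated load so the state space stays small.
    return tuple(int(l // bucket) for l in loads)

def q_learning_schedule(num_servers=3, num_tasks=40, episodes=300,
                        alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    tasks = [rng.uniform(1.0, 4.0) for _ in range(num_tasks)]
    Q = defaultdict(float)  # (state, action) -> estimated value

    for _ in range(episodes):
        loads = [0.0] * num_servers
        for dur in tasks:
            s = discretize(loads)
            # Epsilon-greedy action selection: action = which server gets the task.
            if rng.random() < epsilon:
                a = rng.randrange(num_servers)
            else:
                a = max(range(num_servers), key=lambda x: Q[(s, x)])
            loads[a] += dur
            # Reward penalises growth of the makespan (max server load).
            r = -max(loads)
            s2 = discretize(loads)
            best_next = max(Q[(s2, x)] for x in range(num_servers))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    # Evaluate the greedy policy learned by Q-learning.
    loads = [0.0] * num_servers
    for dur in tasks:
        s = discretize(loads)
        a = max(range(num_servers), key=lambda x: Q[(s, x)])
        loads[a] += dur
    return max(loads), sum(tasks)

makespan, serial = q_learning_schedule()
print(f"learned makespan: {makespan:.2f}, serial makespan: {serial:.2f}")
```

A learned makespan close to the perfect-balance bound (total work divided by the number of servers) indicates the policy is spreading tasks across servers rather than overloading one of them.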
