Abstract

In this paper, we explore how to optimize IoT-based resource allocation and scheduling in a cloud platform environment, focusing on improving computing resource utilization and quality of service (QoS) while reducing latency and packet loss. A model is adopted that comprises a number of edge servers and randomly generated computational tasks, taking into account the network conditions between the servers and the tasks. An objective function is established that aims to maximize computational resource utilization and QoS, and the corresponding constraints are proposed. Simulations are conducted using CloudSim, and the experimental results show that the total number of VoCS increases from 243.63 to 1397.71 as the scheduling demand is increased from 8 to 64, demonstrating the adaptability and efficiency of the algorithm under different demands. In addition, the algorithm exhibits low load imbalance and short task completion time when handling both small-scale (200 tasks) and large-scale (6000 tasks) task sets, which confirms its effectiveness. Ultimately, the scheduling method proposed in this study not only improves resource utilization and quality of service but also reduces task completion time and cost.
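As a hedged illustration only, an objective of the kind described above (maximizing utilization and QoS subject to assignment and capacity constraints) is commonly written as a weighted combination over tasks and edge servers; the notation below (the weights \alpha, \beta, the assignment variables x_{ij}, the demands r_i, and the capacities C_j) is an assumption for exposition, not the authors' exact formulation:

\max_{x} \; \alpha \sum_{j} U_j(x) + \beta \sum_{i} Q_i(x)
\quad \text{s.t.} \quad
\sum_{j} x_{ij} = 1 \;\; \forall i, \qquad
\sum_{i} r_i \, x_{ij} \le C_j \;\; \forall j, \qquad
x_{ij} \in \{0,1\},

where x_{ij} = 1 if task i is placed on edge server j, r_i is the task's resource demand, C_j is the server's capacity, U_j(x) is the resulting utilization of server j, and Q_i(x) is a QoS score for task i (e.g., penalizing latency and packet loss).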
