Abstract

The computing industry is at an inflection point: the rollout of the Internet of Things and 5G communications has pushed centralized cloud computing toward the network edge, producing a paradigm shift. Edge computing brings computation, network control, and storage to the network edge to support computation-intensive and latency-critical applications on resource-limited endpoints. It allows edge devices to offload their excess computing tasks to edge servers, which fully exploits the servers' computational and storage capabilities and executes the tasks efficiently. However, offloading all excess tasks to an edge server leads to long processing delays and high energy consumption when the number of tasks is large. Moreover, leaving edge devices idle and powerful cloud centers unused wastes resources. Adopting a collaborative scheduling strategy among edge servers, cloud centers, and edge devices, driven by task properties, optimization objectives, and system status, is therefore critical to the successful operation of edge computing. This paper briefly summarizes the edge computing architecture for information and task processing and examines collaborative scheduling scenarios. Resource scheduling techniques are then discussed and compared across four collaboration modes. As part of the survey, we present a thorough overview of the task offloading schemes proposed for edge computing and, based on the surveyed literature, briefly examine fairness and load-balancing indicators in scheduling. Finally, the issues, challenges, and future directions of edge computing resource scheduling are discussed.
