Abstract
As a reaction and complement to cloud computing, edge computing is a computing paradigm designed for low-latency computing. Edge servers, deployed at the boundary of the Internet, bridge distributed end devices and the centralized cloud server, forming a harmonious architecture with low latency and balanced loads. Careful task scheduling, including task assignment and processor dispatching, is essential to the success of edge computing systems in dense small cell networks. Many factors must be considered, such as servers' computing power, storage capacity, loads, and bandwidth, as well as tasks' sizes, delays, partitionability, and so forth. This study contributes to task scheduling for multicore edge computing environments. We first show that this scheduling problem is NP-hard. An efficient and effective heuristic is then proposed to tackle the problem. Our Multicore Task assignment for maximum Rewards (MAR) scheme differs from most previous schemes by jointly considering three critical factors: task partitionability, multicore processing, and task properties. A task's priority is decided by its cost function, which takes into account the task's size, deadline, and partitionability, as well as the cores' loads and processing power. First, tasks from end devices are assigned to edge servers according to the servers' loads and storage. Next, tasks are assigned to the cores of the selected server. Simulations compare the proposed scheme with First-Come-First-Serve (FCFS), Shortest Task First (STF), Delay Priority Scheduling (DPS), and the Green Greedy Algorithm (GGA). They demonstrate that MAR significantly increases the task completion ratio and greatly reduces the number of aborted tasks: for hotspots, the improvement in task completion ratio over FCFS, STF, DPS, and GGA is up to 26%, 25%, 22%, and 9%, respectively.
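The paper's exact cost function and dispatching procedure are not reproduced here, but the factors listed above admit a minimal sketch of the per-server dispatching stage. All names, the slack-based cost, and the partitionability bonus below are illustrative assumptions, not the authors' MAR formulation:

```python
from dataclasses import dataclass

@dataclass
class Task:
    size: float          # workload, e.g. required CPU cycles
    deadline: float      # absolute deadline (seconds)
    partitionable: bool  # whether the task can be split across cores

@dataclass
class Core:
    speed: float         # processing power (cycles per second)
    load: float = 0.0    # time at which the core becomes free

def slack(task: Task, cores: list[Core]) -> float:
    """Illustrative cost: deadline minus the earliest achievable finish
    time; partitionable tasks get a small bonus since they are more
    flexible. The real MAR cost function is defined in the paper."""
    finish = min(c.load + task.size / c.speed for c in cores)
    return task.deadline - finish + (1.0 if task.partitionable else 0.0)

def dispatch(tasks: list[Task], cores: list[Core]) -> list[tuple[Task, Core]]:
    """Schedule the most urgent (lowest-slack) tasks first on the core
    that finishes them earliest; tasks that would miss their deadline
    are aborted (skipped)."""
    schedule = []
    for task in sorted(tasks, key=lambda t: slack(t, cores)):
        core = min(cores, key=lambda c: c.load + task.size / c.speed)
        finish = core.load + task.size / core.speed
        if finish <= task.deadline:
            core.load = finish  # core stays busy until this time
            schedule.append((task, core))
    return schedule
```

Sorting by slack schedules the tightest-deadline tasks first, while the earliest-finish core choice mirrors the scheme's stated sensitivity to core loads and processing power.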
Highlights
As applications evolve, computing paradigms shift in three directions: centralized, distributed, and edge computing
In between the two extremes, the rise of the Internet of Things (IoT) [2] inspires the emergence of a new computing paradigm called edge computing [3]
Upon the completion of a task, edge servers return the results to the end device and pass the refined information to the cloud server
Summary
As applications evolve, computing paradigms shift in three directions: centralized, distributed, and edge computing. In an edge computing environment, edge servers are deployed in the proximity of the access networks. End devices, which are in general inferior in computing power and storage capacity, offload their computing tasks to edge servers rather than the cloud server as their first choice. Upon the completion of a task, edge servers return the results to the end device and pass the refined information to the cloud server. According to the white paper by ETSI, edge computing distinguishes itself with the following characteristics: on-premises deployment, proximity, lower latency, location awareness, and network context information. Tasks from end devices are assigned to edge servers according to the servers' loads and storage.
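As a rough illustration of this first assignment stage, a server-selection step might filter servers by free storage and break ties by load; the helper below is a hypothetical sketch under those assumptions, not the paper's algorithm:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EdgeServer:
    name: str
    free_storage: float  # bytes available for offloaded tasks
    load: float          # current utilization in [0, 1]

def select_server(task_size: float,
                  servers: list[EdgeServer]) -> Optional[EdgeServer]:
    """Keep only servers with enough free storage, then pick the
    least-loaded candidate; None means no edge server qualifies and
    the task would fall back to the cloud."""
    candidates = [s for s in servers if s.free_storage >= task_size]
    return min(candidates, key=lambda s: s.load) if candidates else None
```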