Abstract
The Internet of things (IoT) has emerged as the enabling technology for smart applications in different domains, such as transportation, healthcare, industry, smart homes and buildings, and education (e.g., [1-5]). IoT applications rely on the deployment of resource-constrained devices that collect data from the environment in which they are immersed and control events of interest through actuators. One of the daunting challenges in many IoT applications is the need for real-time processing of the large volumes of produced data. Such processing is often impractical to perform on the IoT devices themselves, due to their resource-constrained nature and the energy cost it incurs. In this regard, IoT data is often offloaded for processing on distant, powerful cloud servers, which return the results of the heavy computations to the IoT devices. This approach is well-suited for computation-intensive tasks in IoT applications. However, offloading tasks to cloud servers adds delay to the IoT application, on top of the network overhead. Therefore, edge computing has been proposed to provide computation, communication, and storage resources closer to IoT devices. The general idea is to place resources in the proximity of the IoT devices that will demand them. Thus, the latency involved in the IoT application is reduced, since computation-intensive tasks are processed on edge devices rather than on distant cloud servers. One of the critical challenges in edge-aided IoT applications is that edge devices have limited resource capabilities when compared to cloud servers. In this regard, edge resources must be managed and allocated efficiently so that IoT applications receive guaranteed quality of service (QoS). This tutorial motivates and explores the challenges, design principles, and goals of edge computing for IoT applications. It presents the building blocks for designing optimization models for offloading IoT tasks to edge nodes. In doing so, it discusses the communication challenges between IoT and edge devices and highlights the mathematical formulations commonly used in the literature to model IoT-to-edge communication. Furthermore, this tutorial discusses optimization-based and machine learning (ML)-based solutions for tackling the task offloading decision problem. It also presents recent advances in resource management solutions aimed at efficient resource allocation on edge devices. Finally, it concludes with a discussion of research opportunities and challenges in the edge-assisted Internet of things.
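To make the task offloading decision problem mentioned above concrete, the following is a minimal illustrative sketch, not taken from the tutorial itself, of the binary latency-based offloading rule commonly formulated in the literature: offload a task to an edge node only if uploading its input and executing it remotely finishes sooner than executing it locally. All function names, parameter names, and numeric values are hypothetical.

```python
# Hypothetical sketch of a binary offloading decision based on completion latency.
# A task is characterized by its CPU demand (cycles) and input size (bits).

def local_latency(task_cycles: float, f_local_hz: float) -> float:
    """Time to execute the task on the IoT device itself."""
    return task_cycles / f_local_hz

def edge_latency(task_cycles: float, input_bits: float,
                 uplink_bps: float, f_edge_hz: float) -> float:
    """Time to upload the task input over the wireless link, then run it on the edge."""
    return input_bits / uplink_bps + task_cycles / f_edge_hz

def should_offload(task_cycles: float, input_bits: float, f_local_hz: float,
                   uplink_bps: float, f_edge_hz: float) -> bool:
    """Offload iff the edge completion time beats local execution."""
    return (edge_latency(task_cycles, input_bits, uplink_bps, f_edge_hz)
            < local_latency(task_cycles, f_local_hz))

if __name__ == "__main__":
    # Hypothetical task: 5e8 CPU cycles, 1 MB (8e6 bits) of input,
    # a 0.5 GHz IoT device, a 10 Mbps uplink, and a 10 GHz edge server.
    # Local: 1.0 s; edge: 0.8 s upload + 0.05 s compute = 0.85 s -> offload.
    print(should_offload(5e8, 8e6, 0.5e9, 10e6, 10e9))  # True
```

The optimization models discussed in the tutorial generalize this rule, for example by adding energy terms, queueing at the edge, or constraints coupling the decisions of many devices competing for the same edge resources.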