Abstract

The rapid emergence of deep learning (DL) algorithms has paved the way for bringing artificial intelligence (AI) services to end users. The intersection of edge computing and AI has produced an exciting area of research called edge artificial intelligence (Edge AI). Edge AI has enabled a paradigm shift in many application areas, such as precision medicine, wearable sensors, intelligent robotics, industry, and agriculture. The training and inference of DL algorithms are migrating from the cloud to the edge. Computationally expensive, memory- and power-hungry DL algorithms are optimized to leverage the full potential of Edge AI. Embedding intelligence in edge devices such as the Internet of Things (IoT), smartphones, and cyber-physical systems (CPS) can ensure user privacy and data security. By processing data near its source, Edge AI eliminates the need for cloud transmission and significantly reduces latency, enabling real-time, learned, and automatic decision-making. However, computing resources at the edge suffer from power and memory constraints. Various compression and optimization techniques have been developed, in both algorithms and hardware, to overcome these resource constraints. In addition, algorithm-hardware codesign has emerged as a crucial element in realizing efficient Edge AI. This chapter focuses on each component of integrating DL into Edge AI: model compression, algorithm-hardware codesign, available edge hardware platforms, and challenges and future opportunities.

Keywords: Artificial intelligence · Edge AI · Machine learning · Deep learning · Model compression · Algorithm-hardware codesign
