Abstract

To enhance the quality and speed of data processing while protecting the privacy and security of data, edge computing has been extensively applied to support data-intensive intelligent processing services at the edge. Among these data-intensive services, ensemble learning-based services can naturally leverage the distributed computation and storage resources of edge devices to achieve efficient data collection, processing, and analysis. Collaborative caching has been applied in edge computing to support services close to the data source, so that the limited resources at edge devices can be used to support high-performance ensemble learning solutions. To this end, we propose an adaptive in-network collaborative caching scheme for ensemble learning at the edge. First, an efficient data representation structure is proposed to record the data cached at different nodes. In addition, we design a collaboration scheme that helps edge nodes cache valuable data for local ensemble learning, by scheduling local caching according to a summarization of the data representations from different edge nodes. Our extensive simulations demonstrate the high performance of the proposed collaborative caching scheme, which significantly reduces both learning latency and transmission overhead.

Highlights

  • With the breakthrough of artificial intelligence (AI), we are witnessing a booming increase in AI-based applications and services [1]. The existing intelligent applications are computation intensive.

  • Neural network models learn relationships among a huge amount of training data [4]. This type of complex nonlinear model is sensitive to initial conditions, both in terms of the initial random weights and in terms of the statistical noise in the training data [5]. This nature of the learning algorithm means that each trained neural network model may learn a different mapping of features from inputs to outputs, and thus perform differently in practice.

  • To differentiate the submodels as much as possible, we use different data to train the various submodels and thereby improve the performance of ensemble learning. This requires: (1) an efficient way to record the cached data items, (2) a mechanism to exchange the records of cached data among different edge nodes, and (3) a scheduler that arranges edge caching according to the records, so that different submodels are trained for high-quality ensemble learning.
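The three requirements above could be sketched, for illustration, with a Bloom-filter-style bitmap as the compact record of cached data items. This is an assumption made for exposition, not the paper's actual data structure, and names such as `CacheRecord` and `schedule_caching` are hypothetical:

```python
# Illustrative sketch of (1) recording cached items, (2) merging exchanged
# records, and (3) scheduling caching toward items NOT cached elsewhere,
# so that locally trained submodels stay diverse.
import hashlib


class CacheRecord:
    """Compact membership summary of the data items cached at one edge node."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [0] * size_bits

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        """Step (1): record a locally cached data item."""
        for p in self._positions(item):
            self.bits[p] = 1

    def probably_contains(self, item):
        # False positives are possible, false negatives are not.
        return all(self.bits[p] for p in self._positions(item))

    def merge(self, other):
        """Step (2): summarize records exchanged from another edge node."""
        merged = CacheRecord(self.size, self.num_hashes)
        merged.bits = [a | b for a, b in zip(self.bits, other.bits)]
        return merged


def schedule_caching(candidates, neighbor_summary):
    """Step (3): rank candidate items, preferring those not already cached
    by neighbors (False sorts before True), to keep submodel data distinct."""
    return sorted(candidates,
                  key=lambda item: neighbor_summary.probably_contains(item))
```

A node would periodically broadcast its `CacheRecord` bitmap (a few hundred bytes, regardless of cache size), merge the records it receives, and then rank its caching candidates against the merged summary.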



Introduction

With the breakthrough of artificial intelligence (AI), we are witnessing a booming increase in AI-based applications and services [1]. The existing intelligent applications are computation intensive. To process a huge amount of data in time at the edge of the network, edge computing has developed rapidly in recent years. The rapid uptake of edge computing applications and services poses considerable challenges on networking resources [3]. Meeting these challenges is difficult with the conventional networking infrastructure. Neural network models learn relationships among a huge amount of training data [4]. This type of complex nonlinear model is sensitive to initial conditions, both in terms of the initial random weights and in terms of the statistical noise in the training data [5]. This nature of the learning algorithm means that each trained neural network model may learn a different mapping of features from inputs to outputs, and thus perform differently in practice.
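This sensitivity to initial conditions and data noise is precisely what ensemble learning exploits: submodels trained on different samples disagree individually, but their combination is more robust. A minimal illustrative sketch (not the paper's method; the toy threshold model and all names are hypothetical), using bootstrap resampling and majority voting:

```python
# Toy demonstration: models trained from differently seeded bootstrap samples
# of the same labeled data learn slightly different decision thresholds;
# majority voting combines them into one ensemble prediction.
import random


def train_threshold_model(data, seed):
    """Train a toy 1-D classifier: a decision threshold learned from a
    bootstrap resample, so each seed sees different statistical noise."""
    rng = random.Random(seed)
    sample = [rng.choice(data) for _ in data]  # bootstrap resample
    pos = [x for x, label in sample if label == 1]
    neg = [x for x, label in sample if label == 0]
    if not pos or not neg:  # degenerate resample: fall back to one class
        return lambda x: 0
    threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x >= threshold else 0


def ensemble_predict(models, x):
    """Majority vote over the submodels' individual predictions."""
    votes = sum(model(x) for model in models)
    return 1 if votes * 2 > len(models) else 0
```

In the collaborative-caching setting described above, diversity among the submodels would come from each edge node caching (and training on) different data rather than from bootstrap resampling, but the voting step is the same.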

