Abstract

Deploying deep learning models on edge servers can effectively relieve the pressure on cloud data centers in terms of computing, communication, and energy. In this study, we extend this deployment scheme from edge servers to mobile terminals to further save spectrum resources, make better use of communication resources, and support intelligent applications. However, if the model is trained only on local devices with weak computing power, it converges slowly and the mobile terminal consumes a large amount of energy. We therefore deploy deep learning algorithms on both edge servers and mobile terminals, taking into account the delay and energy-consumption constraints of terminals during task processing. Specifically, a lightweight federated learning model is first built through cooperative training between edge servers and mobile terminals. The trained model is then migrated to the mobile terminals. Finally, image identification is used to verify the effectiveness and efficiency of the proposed model. The experimental results on two datasets and three network models show that the model reaches an accuracy of 90% while further reducing processing delay and saving energy.
