Abstract

Distributed machine learning systems train neural network models by using the devices and resources available across a network. One such recently introduced system is split learning, which trains a deep neural network collaboratively between a server and a client without sharing the raw data, enabling private and secure training. Implementing such systems on edge devices adds computation and communication overhead that many edge devices cannot afford, especially in IoT systems, where resources are limited. In this paper, we introduce a modified split learning system that incorporates an autoencoder and an adaptive threshold mechanism. The modified system incurs less communication and computation overhead than the original split learning system. We deployed the modified system on an IoT system, and the results demonstrate the advantages of the proposed mechanism: both communication and computation overhead were reduced with negligible performance loss.
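The two ideas in the abstract can be illustrated with a minimal NumPy sketch. It assumes (the paper's exact design is not given here) that the encoder half of an autoencoder sits on the client to compress the cut-layer activations before transmission, and that the adaptive threshold skips sending a compressed batch when it differs little from the last one sent. All layer sizes, weights, and function names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: the 16-dim cut-layer activation is compressed
# to a 4-dim code, a 4x reduction in communication per batch.
D_IN, D_CUT, D_CODE = 32, 16, 4

# Client side: model layers up to the cut layer, plus the encoder half
# of the autoencoder that compresses the "smashed" activations.
W_client = rng.normal(size=(D_IN, D_CUT)) * 0.1
W_enc = rng.normal(size=(D_CUT, D_CODE)) * 0.1
# Server side: decoder half of the autoencoder (remaining model layers omitted).
W_dec = rng.normal(size=(D_CODE, D_CUT)) * 0.1

def client_forward(x, prev_code, threshold):
    """Compute cut-layer activations, compress them, and decide whether to send."""
    h = np.tanh(x @ W_client)      # smashed data at the cut layer
    code = h @ W_enc               # compressed representation to transmit
    if prev_code is not None and np.linalg.norm(code - prev_code) < threshold:
        return None, prev_code     # change below threshold: skip transmission
    return code, code

def server_forward(code):
    """Decode the compressed activations and continue the forward pass."""
    return np.tanh(code @ W_dec)

x1 = rng.normal(size=(1, D_IN))
code, last = client_forward(x1, None, threshold=0.5)
sent = code is not None            # first batch is always transmitted
h_server = server_forward(code)    # server reconstructs the activations

# A near-duplicate input yields a nearly identical code and is skipped.
code2, last = client_forward(x1 + 1e-4, last, threshold=0.5)
skipped = code2 is None
```

In this sketch the compression ratio (here 16 to 4) lowers the per-batch communication cost, while the threshold check trades a small amount of freshness in the transmitted activations for fewer transmissions overall, matching the overhead reductions the abstract reports.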
