Abstract

In the era of big data, new learning techniques are emerging to address the challenges of data collection, storage, scalability, and privacy. To overcome these challenges, we propose a distributed learning system that merges a hybrid edge-cloud split-learning architecture with a semi-supervised learning scheme. The proposed system, built on three semi-supervised learning algorithms (FixMatch, Virtual Adversarial Training, and Mean Teacher), is compared to a supervised learning scheme and is trained on different datasets, data distributions (IID and non-IID), and numbers of clients. The new system efficiently exploits the local unlabelled samples on the client side and delivers a performance gain exceeding 30% in most cases, even with a small percentage of labelled data. Moreover, certain Split-SSL algorithms matched or occasionally exceeded the performance of more resource-intensive algorithms while requiring less processing power and convergence time.
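To illustrate how the client-side unlabelled samples can contribute to training, the sketch below shows the core idea behind FixMatch, one of the three semi-supervised algorithms named above: a confident prediction on a weakly augmented view becomes a hard pseudo-label, which is then enforced on a strongly augmented view. This is a minimal, framework-free NumPy sketch of the standard FixMatch unlabelled loss, not the paper's actual implementation; the function name and the 0.95 confidence threshold are illustrative assumptions.

```python
import numpy as np

def fixmatch_unlabeled_loss(probs_weak, logits_strong, threshold=0.95):
    """FixMatch-style loss on a batch of unlabelled samples (sketch).

    probs_weak    : (N, C) softmax outputs for weakly augmented views
    logits_strong : (N, C) logits for strongly augmented views
    threshold     : confidence cutoff (0.95 as in the FixMatch paper)

    Only samples whose weak-view confidence reaches `threshold`
    contribute, via cross-entropy against the hard pseudo-label.
    """
    probs_weak = np.asarray(probs_weak, dtype=float)
    logits_strong = np.asarray(logits_strong, dtype=float)

    pseudo = probs_weak.argmax(axis=1)           # hard pseudo-labels
    mask = probs_weak.max(axis=1) >= threshold   # confidence mask

    # numerically stable log-softmax of the strong-view logits
    z = logits_strong - logits_strong.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))

    ce = -log_probs[np.arange(len(pseudo)), pseudo]
    return float((ce * mask).sum() / max(mask.sum(), 1))
```

In the split-learning setting, a loss of this form would be computed from the activations exchanged at the cut layer, so the client's raw unlabelled data never leaves the edge device.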
