Abstract

Federated learning (FL) in Internet of Things (IoT) systems enables distributed model training using a large corpus of decentralized training data dispersed among multiple IoT clients. In this distributed setting, system and statistical heterogeneity, in the form of highly imbalanced and non-independent and identically distributed (non-i.i.d.) data stored on multiple devices, is likely to hinder model training. Existing methods aggregate models while disregarding the internal representations being learned, even though these representations play an essential role in solving the pursued task, especially in the case of deep learning models. To leverage feature representations in an FL framework, we introduce a method, called FedMargin, which computes client deviations using margins over feature representations learned on distributed data, and applies them to drive federated optimization via an attention mechanism. Local and aggregated margins are jointly exploited, taking into account both the local representation shift and the representation discrepancy with the global model. In addition, we propose three methods to analyse statistical properties of feature representations learned in FL, in order to elucidate the relationship between accuracy, margins, and feature discrepancy of FL models. In experimental analyses, FedMargin achieves state-of-the-art accuracy and convergence rate across image classification and semantic segmentation benchmarks by enabling maximum-margin training of FL models. Moreover, FedMargin reduces the uncertainty of predictions of FL models compared to the baseline. In this work, we also evaluate FL models on dense prediction tasks, such as semantic segmentation, demonstrating the versatility of the proposed approach.
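To make the margin-guided aggregation idea concrete, below is a minimal sketch of attention-weighted federated averaging driven by per-client margin statistics. It is not the paper's implementation: the function names, the scalar margin summaries, and the softmax-over-deviations weighting rule are all illustrative assumptions; FedMargin's actual margin computation over feature representations and its attention mechanism may differ.

```python
# Hypothetical sketch of margin-guided attentive aggregation, loosely inspired
# by the abstract. Names and the exact weighting rule are assumptions, not the
# paper's method.
import numpy as np

def attention_weights(client_margins, global_margin, temperature=1.0):
    """Softmax attention over clients: clients whose local feature margins
    deviate less from the aggregated (global) margin get larger weight."""
    deviations = np.abs(np.asarray(client_margins) - global_margin)
    logits = -deviations / temperature
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

def aggregate(client_params, weights):
    """Attention-weighted average of client parameters (dicts of arrays)."""
    keys = client_params[0].keys()
    return {k: sum(w * p[k] for w, p in zip(weights, client_params))
            for k in keys}

# Toy usage: three clients, each with a scalar margin statistic and a
# one-tensor model.
margins = [0.8, 0.3, 0.75]               # per-client margins (assumed scalars)
global_margin = float(np.mean(margins))  # aggregated margin
w = attention_weights(margins, global_margin)
params = [{"w": np.ones(4) * i} for i in range(3)]
global_params = aggregate(params, w)
print(w, global_params["w"])
```

In this toy run, the client whose margin deviates most from the aggregate (0.3) receives the smallest weight, illustrating how margin discrepancy could down-weight clients whose learned representations drift from the global model.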
