FedAWA: Aggregation Weight Adjustment in Federated Domain Generalization

Abstract

Domain Generalization (DG) in the federated learning setting (i.e., Federated Domain Generalization, FDG) is gaining increasing attention. FDG aims to learn a global model that generalizes well to clients from unseen domains in a privacy-preserving manner. Existing works mainly focus on learning invariant features or transmitting data distributions in a privacy-preserving manner; the correctness of the aggregation weights themselves has not been extensively discussed. In this paper, we examine the rationality of aggregation weights and propose a novel FDG method, FedAWA. We first replace the traditional weighted aggregation based on data volume with an arithmetic mean. We then propose a confidence adjustment that corrects the aggregation weights to account for the uncertainty of local model training. Moreover, we propose a fairness adjustment that adapts local training epochs to prevent local models from contributing excessively or insufficiently to the global model. Our method can serve as a plug-and-play plugin. Experiments on several benchmarks demonstrate its effectiveness: it improves the generalization of existing federated learning algorithms.
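To make the baseline change concrete, the sketch below contrasts the data-volume weighting of standard FedAvg with the plain arithmetic mean the abstract describes as FedAWA's starting point. This is a minimal illustration assuming PyTorch state dicts; the function names (fedavg_aggregate, uniform_aggregate) and variable names are hypothetical, and the confidence and fairness adjustments are not specified in the abstract, so they are omitted here.

```python
# Illustrative sketch only: not the authors' implementation.
# The abstract says FedAWA replaces data-volume-based aggregation
# weights with an arithmetic mean before its further adjustments.
from typing import Dict, List

import torch


def fedavg_aggregate(client_states: List[Dict[str, torch.Tensor]],
                     n_samples: List[int]) -> Dict[str, torch.Tensor]:
    """Traditional FedAvg: weights proportional to each client's data volume."""
    total = sum(n_samples)
    weights = [n / total for n in n_samples]
    return {
        key: sum(w * state[key] for w, state in zip(weights, client_states))
        for key in client_states[0]
    }


def uniform_aggregate(
        client_states: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Arithmetic mean: every client gets the same aggregation weight,
    which (per the abstract) FedAWA then adjusts by confidence."""
    k = len(client_states)
    return {
        key: sum(state[key] for state in client_states) / k
        for key in client_states[0]
    }
```

Under this framing, the arithmetic mean decouples a client's influence on the global model from its dataset size, which is the property FedAWA's subsequent confidence and fairness adjustments build on.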
