Abstract
Federated learning (FL) is a decentralized, privacy-preserving machine learning technique that protects data privacy by training models locally rather than sharing datasets. However, due to the limited computing resources of devices and highly heterogeneous data in practical settings, the training efficiency and resource utilization of federated learning are low. To address these challenges, we introduce a blockchain-assisted, dynamically adaptive, and personalized federated learning framework (TV-FedAvg) for settings with constrained computing resources and heterogeneous data. After each round of local training, we use an improved scoring model based on VIKOR and TOPSIS to comprehensively score the devices. The scores are then used to select devices for participation in global aggregation and to carry out model aggregation through blockchain consensus. Furthermore, resources are reallocated for the next round to improve resource efficiency, model fairness, and performance. Finally, we demonstrate experimentally that TV-FedAvg outperforms models such as pFedMe, FedAvg, Per-FedAvg, and TOPSIS in both efficiency and performance.
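The device-scoring step can be illustrated with a plain TOPSIS computation. This is only a sketch: the paper's improved VIKOR/TOPSIS hybrid is not detailed in the abstract, and the criteria (compute capacity, local loss, local data size) and weights below are illustrative assumptions, not values from the paper.

```python
import math

def topsis_scores(matrix, weights, benefit):
    """Score alternatives (devices) with standard TOPSIS.

    matrix:  rows = devices, columns = criteria
    weights: one weight per criterion (should sum to 1)
    benefit: True if higher is better for that criterion, else False
    """
    n_cols = len(matrix[0])
    # 1. Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_cols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_cols)] for row in matrix]
    # 2. Ideal-best and ideal-worst value per criterion.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # 3. Relative closeness to the ideal point; higher score = better device.
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical devices: [compute (GFLOPS), local loss, local samples].
devices = [[10, 0.9, 500], [4, 0.4, 200], [8, 0.2, 800]]
scores = topsis_scores(devices, weights=[0.4, 0.3, 0.3],
                       benefit=[True, False, True])
```

In a selection round, the framework would rank devices by these closeness scores and admit the top-scoring ones to global aggregation; the VIKOR side of the hybrid additionally trades off group utility against individual regret, which this sketch omits.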