Abstract
Owing to insufficient attention to training data privacy in Deep Learning (DL), personal information has been inadvertently leaked, with serious consequences for data providers. Safeguarding data privacy throughout the deep learning process is therefore a paramount concern. In this paper, the author proposes integrating FedAvg into the training procedure to ensure data security and privacy. In the experiments, the author first applied data augmentation to balance the class samples in the dataset, then simulated four users on a four-core Central Processing Unit (CPU) and built a network architecture based on DenseNet201. Each user cloned all parameters of the global model and received an equal portion of the dataset. After the parameters were updated locally, the weights were aggregated by averaging and passed back to the global model. Additionally, the author introduced a learning rate annealer to help the model converge. The experimental results demonstrate that incorporating FedAvg saves training time and achieves excellent performance in skin cancer classification. Despite a slight loss in accuracy, the algorithm addresses privacy concerns, making FedAvg highly valuable.
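The aggregation step described in the abstract, averaging equally weighted client updates into the global model, can be sketched as follows. This is a minimal illustration, not the author's code: the function name, the per-layer list representation of model weights, and the example tensors are all assumptions. Because each simulated user holds an equal portion of the dataset, the weighted FedAvg average reduces to a plain mean.

```python
import numpy as np

def fedavg(client_weights):
    """Average each parameter tensor across clients.

    client_weights: a list of clients, each a list of per-layer
    numpy arrays (all clients share the same layer shapes).
    With equal data shards, a plain per-layer mean equals the
    data-size-weighted FedAvg average.
    """
    return [np.mean(np.stack(layer_group), axis=0)
            for layer_group in zip(*client_weights)]

# Hypothetical example: two clients, each with a weight matrix and a bias.
client_a = [np.array([[1.0, 2.0]]), np.array([0.0])]
client_b = [np.array([[3.0, 4.0]]), np.array([2.0])]

global_weights = fedavg([client_a, client_b])
print(global_weights[0])  # [[2. 3.]]
print(global_weights[1])  # [1.]
```

In a full training round, the averaged `global_weights` would be copied back to every client before the next local update, mirroring the clone-train-aggregate loop the abstract describes.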