Abstract

Owing to insufficient attention to training data privacy in Deep Learning (DL), personal information has inadvertently been leaked, with consequential impacts on data providers. Safeguarding data privacy throughout the deep learning process is therefore a paramount concern. In this paper, the author proposes integrating FedAvg into the training procedure as a measure to ensure data security and privacy. In the experiments, the author first applied data augmentation to balance the samples in the dataset, then simulated four users on a four-core Central Processing Unit (CPU) and built a network architecture based on DenseNet201. Each user cloned all parameters of the global model and received an equal portion of the dataset. After the parameters were updated locally, the weights were aggregated by averaging and passed back to the global model. Additionally, the author introduced a learning-rate annealer to help the model converge better. The experimental results demonstrate that incorporating FedAvg saves training time and achieves excellent performance in skin cancer classification. Despite a slight loss in accuracy, the algorithm addresses privacy concerns, making the use of FedAvg highly valuable.
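The aggregation step described above (local updates averaged and passed back to the global model) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual code: it assumes four clients with equal data shards, so a simple unweighted element-wise average of their parameters is sufficient.

```python
# Hypothetical sketch of FedAvg weight aggregation with equal data shards.
# Each simulated client trains locally and returns its updated parameters;
# the server averages each parameter across clients to update the global model.

def fedavg_aggregate(client_weights):
    """Element-wise average of each (flattened) parameter across all clients."""
    num_clients = len(client_weights)
    num_params = len(client_weights[0])
    return [
        sum(weights[i] for weights in client_weights) / num_clients
        for i in range(num_params)
    ]

# Example: four simulated clients, each holding two scalar parameters.
clients = [
    [1.0, 2.0],
    [3.0, 4.0],
    [5.0, 6.0],
    [7.0, 8.0],
]
global_weights = fedavg_aggregate(clients)
print(global_weights)  # [4.0, 5.0]
```

With unequal shard sizes, FedAvg would instead weight each client's parameters by its share of the total data; the equal split in the paper makes the plain average equivalent.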
