Abstract

Transfer learning, a machine learning approach that enhances model generalization across domains, has extensive applications in many fields. However, the risk of privacy leakage remains a crucial concern during the transfer learning process. Differential privacy, with its rigorous mathematical foundation, has been proven to offer consistent and robust privacy protection. This study investigates the logistic regression transfer learning problem under differential privacy. When the transferable sources are known, we propose a two-step transfer learning algorithm. When they are unknown, we introduce an algorithm-free, cross-validation-based transferable source detection method to mitigate the adverse effects of non-informative sources. The effectiveness of the proposed algorithms is validated through simulations and experiments on real-world data.
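The abstract does not spell out the two steps, but a common pattern for this kind of transfer scheme is: (1) fit a privatized logistic regression on the source data, then (2) fit a privatized correction on the target data using the source model's logits as an offset. The sketch below illustrates that pattern under stated assumptions: it uses output perturbation with per-coordinate Laplace noise (a simplification of calibrated DP mechanisms such as Chaudhuri and Monteleoni's), and the function names, the regularization level `lam`, and the privacy split are all hypothetical choices, not the paper's actual algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_logreg(X, y, epsilon, offset=None, lam=0.1, lr=0.1, iters=500, rng=None):
    """L2-regularized logistic regression with output perturbation.

    Trains by gradient descent, then adds Laplace noise whose scale is
    derived from the sensitivity bound 2/(n*lam) of the regularized
    minimizer (a simplified, illustrative calibration).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = X.shape
    off = np.zeros(n) if offset is None else offset
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (sigmoid(off + X @ w) - y) / n + lam * w
        w -= lr * grad
    scale = 2.0 / (n * lam * epsilon)  # per-coordinate Laplace scale (simplified)
    return w + rng.laplace(scale=scale, size=d)

def two_step_transfer(X_src, y_src, X_tgt, y_tgt, eps=1.0):
    """Hypothetical two-step transfer: source fit, then target correction."""
    w_src = dp_logreg(X_src, y_src, eps)                         # step 1: source
    delta = dp_logreg(X_tgt, y_tgt, eps, offset=X_tgt @ w_src)   # step 2: target
    return w_src + delta
```

In this sketch the target step only has to estimate the (presumably small) contrast between source and target coefficients, which is why borrowing from a transferable source can help when the target sample is small.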
