Abstract
We generalize the notion of PAC learning to include transfer learning. In our framework, the link between the source and target tasks arises in two ways: the sample distributions of all tasks are drawn from the same distribution of distributions, and all source and target concepts are restricted to the same hypothesis subclass. We consider two models: an adversary model and a randomized model. In the adversary model, we show that for binary classification, conventional PAC learning is equivalent to the new notion of PAC transfer and to a transfer generalization of the VC dimension. For regression, we show that PAC transferability may exist even in the absence of PAC learnability. In both the adversary and randomized models, we provide PAC-Bayesian and VC-style generalization bounds for transfer learning; in the randomized model, we additionally derive bounds specifically for deep learning. We discuss in detail the tradeoffs among the parameters appearing in these bounds, and we demonstrate both cases in which transfer does not reduce the sample size ('trivial transfer') and cases in which it does ('non-trivial transfer').
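To make the setting concrete, the following is a minimal sketch under assumed notation (the abstract itself fixes no symbols): $\mathcal{P}$ denotes the distribution of distributions, $\mathcal{H}' \subseteq \mathcal{H}$ the shared hypothesis subclass, $k$ the number of source tasks, and $(D_T, c_T)$ the target task.

\[
D_1,\dots,D_k,\,D_T \;\overset{\text{i.i.d.}}{\sim}\; \mathcal{P},
\qquad
c_1,\dots,c_k,\,c_T \in \mathcal{H}' \subseteq \mathcal{H}.
\]

Under this reading, the learner observes labeled samples from the source tasks $(D_i, c_i)$ and seeks to learn $(D_T, c_T)$ from fewer target samples than learning it in isolation would require; transfer is non-trivial exactly when such a reduction is achieved.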