Abstract
Transfer learning is called for when the training and test data do not share the same input distributions ($P^S_X \neq P^T_X$) and/or the same conditional distributions ($P^S_{Y\mid X} \neq P^T_{Y\mid X}$). In the most general case, the input and/or output spaces may also differ: $\mathcal{X}_S \neq \mathcal{X}_T$ and/or $\mathcal{Y}_S \neq \mathcal{Y}_T$. Most work, however, assumes that $\mathcal{X}_S = \mathcal{X}_T$. Furthermore, a commonly held assumption is that, in order to obtain a good (transferred) target hypothesis, the source hypothesis must perform well on the source training data and the "distance" between the source and the target domains must be as small as possible. This paper revisits the reasons for these beliefs and discusses the relevance of these conditions. An algorithm is presented that can handle transfer learning problems where $\mathcal{X}_S \neq \mathcal{X}_T$, and that furthermore brings a fresh perspective on the role of the source hypothesis (it does not have to be good) and on what matters in the distance between the source and the target domains (the translations between them should belong to a limited set). Experiments illustrate the properties of the method and confirm the theoretical analysis. Determining a relevant source hypothesis beforehand remains an open problem, but the perspective provided here helps in understanding its role.
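For concreteness, the setting the abstract refers to can be spelled out using the conventional domain/task decomposition from the domain-adaptation literature; the decomposition below is a sketch of that standard notation, assumed here rather than quoted from the paper itself.

% Conventional transfer-learning notation (an assumed sketch, not the paper's
% own definitions): a domain pairs an input space with a marginal distribution,
% and a task pairs an output space with the conditional over labels.
\begin{align*}
  \mathcal{D}_S &= \bigl(\mathcal{X}_S,\; P^S_X\bigr), &
  \mathcal{T}_S &= \bigl(\mathcal{Y}_S,\; P^S_{Y\mid X}\bigr) && \text{(source)}\\
  \mathcal{D}_T &= \bigl(\mathcal{X}_T,\; P^T_X\bigr), &
  \mathcal{T}_T &= \bigl(\mathcal{Y}_T,\; P^T_{Y\mid X}\bigr) && \text{(target)}
\end{align*}
In this notation, transfer learning is needed as soon as $P^S_X \neq P^T_X$ and/or $P^S_{Y\mid X} \neq P^T_{Y\mid X}$; in the most general case, one may also have $\mathcal{X}_S \neq \mathcal{X}_T$ and/or $\mathcal{Y}_S \neq \mathcal{Y}_T$, which is the case the presented algorithm addresses.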