Abstract

Transfer learning has received much attention recently and has been proven effective in a wide range of applications, whereas studies on regression problems are still scarce. In this article, we focus on the transfer learning problem for regression under conditional shift, where the source and target domains share the same marginal distribution while having different conditional probability distributions. We propose a new framework called transfer learning based on fuzzy residual (ResTL), which learns the target model by preserving the distribution properties of the source data in a model-agnostic way. First, we formulate the target model by adding a fuzzy residual to a model-agnostic source model and reusing the antecedent parameters of the source fuzzy system. Then, two methods for bias computation are provided for different considerations, giving rise to two ResTL methods called ResTL-LS and ResTL-RD. Finally, we conduct a series of experiments on both a toy example and several real-world datasets to verify the effectiveness of the proposed method.
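The structural idea in the abstract, a frozen, model-agnostic source predictor plus a fuzzy residual whose antecedent parameters are reused from the source fuzzy system, can be sketched roughly as below. Everything concrete in the sketch is an assumption made for illustration: the Gaussian membership functions, the zero-order (per-rule bias) consequents, and the least-squares fit on target data stand in for the paper's actual bias-computation procedures (ResTL-LS and ResTL-RD), whose details are not given on this page.

# Minimal, hypothetical sketch of the ResTL idea: target model = source model + fuzzy residual.
# The Gaussian antecedents, zero-order consequents, and least-squares fit are illustrative
# assumptions, not the authors' exact formulation.
import numpy as np

def firing_levels(X, centers, sigmas):
    # Normalized Gaussian rule activations over the K rules reused from the source system.
    # X: (n, d); centers, sigmas: (K, d) antecedent parameters taken from the source.
    d2 = ((X[:, None, :] - centers[None, :, :]) / sigmas[None, :, :]) ** 2
    w = np.exp(-0.5 * d2.sum(axis=2))                     # (n, K)
    return w / (w.sum(axis=1, keepdims=True) + 1e-12)

def fit_restl(source_model, X_t, y_t, centers, sigmas):
    # Residual targets: what the frozen source model gets wrong on the target data.
    r = y_t - source_model(X_t)
    W = firing_levels(X_t, centers, sigmas)               # (n, K) design matrix
    # One bias per rule, estimated by ordinary least squares (an assumed choice).
    b, *_ = np.linalg.lstsq(W, r, rcond=None)
    # Target model: source prediction corrected by the fuzzy residual.
    return lambda X: source_model(X) + firing_levels(X, centers, sigmas) @ b

# Toy usage: a 1-D source model that is biased on the target domain (conditional shift).
rng = np.random.default_rng(0)
source_model = lambda X: np.sin(X[:, 0])                  # frozen, model-agnostic source predictor
X_t = rng.uniform(-3, 3, size=(200, 1))                   # same marginal P(X)
y_t = np.sin(X_t[:, 0]) + 0.5 * X_t[:, 0]                 # different conditional P(Y|X)
centers = np.linspace(-3, 3, 5).reshape(-1, 1)            # reused antecedent centers
sigmas = np.full((5, 1), 1.0)                             # reused antecedent widths
target_model = fit_restl(source_model, X_t, y_t, centers, sigmas)
print(np.mean((target_model(X_t) - y_t) ** 2))            # error after residual correction

In this sketch, reusing the antecedent centers and widths keeps the source's partitioning of the input space intact while only the per-rule residual biases are estimated from target data; this is one reading of how ResTL preserves the distribution properties of the source in a model-agnostic way.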

Highlights

  • Systems based on traditional machine learning techniques often face a major challenge when applied in real-world applications

  • We propose a new framework called transfer learning based on fuzzy residual (ResTL), which learns the target model for regression problems under conditional shift by preserving the distribution properties of the source data in a model-agnostic way

  • General Transformation Function (GTF) [42]: GTF is an algorithm-dependent hypothesis transfer learning method, which characterizes the relationship between the source and the target domains by establishing a GTF



Introduction

Systems based on traditional machine learning techniques often face a major challenge when applied in real-world applications: it is expensive or even impossible to collect a large volume of labeled training data [1], [2]. To address this challenge of data scarcity, transfer learning, which can enhance the learning ability in a target domain by transferring information from related domains, has received much attention recently and has been proven effective in a wide range of applications [3]. Under this powerful paradigm, various learning methods have been proposed according to different assumptions about the relationship between the source and target domains, including covariate shift [4], [5], prior probability shift [6], [7], and sample selection bias [8], [9], among others.
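For concreteness, the conditional-shift assumption studied in this article can be stated as follows; this is the standard formalization written in our own notation, not a statement quoted from the paper. With P_S and P_T denoting the source and target joint distributions over inputs X and response Y:

\[
P_S(X) = P_T(X), \qquad P_S(Y \mid X) \neq P_T(Y \mid X) \quad \text{(conditional shift)},
\]
\[
P_S(X) \neq P_T(X), \qquad P_S(Y \mid X) = P_T(Y \mid X) \quad \text{(covariate shift)}.
\]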

