Abstract

Previous research on the evaluation of transfer learning algorithms has predominantly used real-world datasets to measure an algorithm's performance. A test with a real-world dataset exposes an algorithm to a single instance of distribution difference between the training (source) and test (target) datasets; these previous works have not measured performance over a wide range of source and target distribution differences. We propose a test framework that creates many source and target datasets from a single base dataset, representing a diverse range of distribution differences. These datasets are used as a stress test to measure and compare the performance of transfer learning algorithms and traditional learning algorithms. The unique contributions of this paper, with respect to transfer learning, are defining a test framework, defining multiple distortion profiles, defining a stress test suite, and evaluating and comparing transfer learning and traditional machine learning algorithms over a wide range of distribution differences.
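
The abstract does not specify the framework's distortion profiles or evaluation loop; the following is a minimal sketch of the stress-test idea, assuming scikit-learn, a synthetic base dataset, and two hypothetical profiles (feature_shift, label_noise) that distort the target split at increasing severities. The profile names, severity levels, and baseline model are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Single base dataset from which all source/target pairs are derived.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_src, y_src = X[:1000], y[:1000]
X_tgt_base, y_tgt_base = X[1000:], y[1000:]

def feature_shift(X, y, severity):
    """Covariate-shift profile: offset every feature by a constant."""
    return X + severity, y

def label_noise(X, y, severity):
    """Label-noise profile: flip a fraction of target labels."""
    y = y.copy()
    flip = rng.random(len(y)) < severity
    y[flip] = 1 - y[flip]
    return X, y

profiles = {"feature_shift": feature_shift, "label_noise": label_noise}
severities = [0.0, 0.25, 0.5, 1.0]

# Train once on the source split, then stress-test the model on
# target splits distorted by each profile at each severity.
model = LogisticRegression(max_iter=1000).fit(X_src, y_src)
for name, profile in profiles.items():
    for s in severities:
        X_tgt, y_tgt = profile(X_tgt_base, y_tgt_base, s)
        acc = accuracy_score(y_tgt, model.predict(X_tgt))
        print(f"{name:>13}  severity={s:.2f}  accuracy={acc:.3f}")
```

In this sketch each (profile, severity) pair stands in for one source/target distribution difference, so sweeping the severities traces performance across the range of differences rather than at a single point, which is the gap in prior real-world-dataset evaluations that the paper identifies.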