Abstract

Traditional supervised machine learning evaluates learned classifiers on data drawn from the same distribution as the data used for learning. In practice, this assumption does not always hold, and the learned classifier has to be transferred from the space of the learning data (also called source data) to the space of the test data (also called target data), where it is not directly applicable. To perform this transfer, several methods aim at extracting structural features common to the source and the target. Our approach employs a neural model to encode the structure of the data: such a model is shown to compress the information in the sense of Kolmogorov's theory of information. To transfer from source to target, we adapt a result established for analogical reasoning: the structures of the source and target models are learned by applying the Minimum Description Length principle, which assumes that the chosen transformation has the shortest symbolic description on a universal Turing machine. This leads to a minimization problem over the source and target models. To describe the transfer, we develop a multi-level description of the model transformation, which is used directly in the minimization of the description length. Our approach has been tested on toy examples whose difficulty can be controlled by a single one-dimensional parameter, and it is shown to work efficiently on a wide range of problems.
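As an informal illustration of the description-length criterion mentioned above, the sketch below shows a two-part MDL score for selecting a source-to-target transformation: the cost of describing the transformation plus the cost of encoding the target data under the transformed model. All names (model_code_length, apply_transform, etc.) are hypothetical and the scoring is deliberately crude; this is a conceptual sketch, not the paper's implementation.

    import math

    def model_code_length(num_params, bits_per_param=32):
        # Crude description length of a transformation: a fixed number of bits per parameter.
        return num_params * bits_per_param

    def data_code_length(probabilities):
        # Shannon code length (in bits) of the target data under the model's
        # predicted probabilities for the observed labels.
        return sum(-math.log2(max(p, 1e-12)) for p in probabilities)

    def total_description_length(transformation, target_data, apply_transform):
        # Two-part MDL score: L(transformation) + L(target data | transformed model).
        probs = apply_transform(transformation, target_data)
        return model_code_length(transformation["num_params"]) + data_code_length(probs)

    def select_transformation(candidates, target_data, apply_transform):
        # Pick the candidate transformation minimizing the total description length.
        return min(candidates,
                   key=lambda t: total_description_length(t, target_data, apply_transform))

In this toy scoring, a more elaborate transformation is only preferred when the extra bits spent on describing it are repaid by a shorter encoding of the target data, which is the trade-off the Minimum Description Length principle formalizes.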
