Abstract
In this chapter, we give a comprehensive description of different generalization bounds for domain adaptation that are based on divergence measures between the source and target probability distributions. Before proceeding to the formal presentation of the mathematical results, we first explain why this particular type of result represents the vast majority of all available domain adaptation results. We then take a closer look at the very first steps taken to provide a theoretical background for domain adaptation. Surprisingly, these first results reflected the general intuition behind domain adaptation in a very direct way and remained quite close to the traditional generalization inequalities presented in Chapter 1. After this, we turn our attention to different strategies that were proposed to overcome the flaws of these seminal results and to strengthen the obtained theoretical guarantees.