In this paper, we explore approaches to cross-domain few-shot classification of sentiment aspects. By cross-domain few-shot, we mean a setting in which the model is trained on a large dataset in one domain (for example, hotel reviews) and must perform in another (for example, restaurant reviews) given only a few labelled examples from the target domain. We start with pre-trained monolingual language models. Using the Polish-language AspectEmo dataset, we compare standard gradient-based training to a zero-shot approach and to two dedicated few-shot methods: ProtoNet and NNShot. We find both dedicated methods to be far superior to both gradient-based learning and the zero-shot setup, with NNShot holding a small advantage. Overall, we find few-shot learning to be a compelling alternative, achieving surprisingly strong performance relative to gradient-based training on full-size data.
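For intuition, here is a minimal sketch of the decision rules behind the two few-shot methods named above. The encoder producing the embeddings, the Euclidean distance metric, and all variable names are assumptions for illustration; the abstract only states that pre-trained monolingual language models are used.

```python
import torch

def protonet_predict(support_emb, support_labels, query_emb):
    """ProtoNet rule (sketch): classify each query by the nearest class
    prototype, where a prototype is the mean of that class's support
    embeddings. Euclidean distance is an assumption here."""
    classes = support_labels.unique()
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in classes]
    )
    dists = torch.cdist(query_emb, prototypes)   # (n_query, n_classes)
    return classes[dists.argmin(dim=1)]

def nnshot_predict(support_emb, support_labels, query_emb):
    """NNShot rule (sketch): classify each query by the label of its
    single nearest support example."""
    dists = torch.cdist(query_emb, support_emb)  # (n_query, n_support)
    return support_labels[dists.argmin(dim=1)]

# Toy usage with random embeddings standing in for LM outputs.
support_emb = torch.randn(10, 768)
support_labels = torch.randint(0, 3, (10,))
query_emb = torch.randn(4, 768)
print(protonet_predict(support_emb, support_labels, query_emb))
print(nnshot_predict(support_emb, support_labels, query_emb))
```

The key design difference is that ProtoNet averages the (few) support examples per class before comparing, while NNShot compares against each support example individually, which can matter when support embeddings within a class are heterogeneous.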