Abstract

Zero-shot stance detection (ZSSD) is an important research problem that requires algorithms to detect stance accurately even for unseen targets. In general, stance features can be grouped into two types: target-invariant and target-specific. Target-invariant features express the same stance regardless of the targets they are associated with, so they are general and transferable; target-specific features, in contrast, are directly associated only with specific targets. It is therefore crucial for ZSSD to effectively mine target-invariant features in texts. In this paper, we develop a contrastive learning-based method that mines transferable target-invariant expression features in texts along two dimensions, sentiment and stance, and then generalizes them to unseen targets. Specifically, we first group all texts into several types according to two orthogonal dimensions: sentiment polarity and stance polarity. We then devise a supervised contrastive learning-based strategy to capture the common, transferable expressive features of each type. Finally, we fuse these expressive features with the semantic features of the original texts about specific targets to perform stance detection for unseen targets. Extensive experiments on three benchmark datasets show that our proposed model achieves state-of-the-art performance on most of them. Code and other resources are available on GitHub at https://github.com/zoujiaying1995/sscl-project.
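To make the grouping idea concrete, below is a minimal sketch of a supervised contrastive objective of the kind the abstract describes, where each text's group label combines its sentiment polarity and stance polarity so that texts sharing both polarities are pulled together in the embedding space. The function name, the temperature value, and the label-construction scheme are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss over a batch of text embeddings.

    features: (batch, dim) tensor of text representations.
    labels:   (batch,) tensor of group ids, e.g. one id per
              (sentiment polarity, stance polarity) combination.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature              # pairwise cosine similarities
    # Exclude each example's similarity with itself.
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=features.device)
    # Positives: other examples with the same (sentiment, stance) group.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # Log-softmax over all other examples in the batch.
    sim = sim.masked_fill(~not_self, -1e9)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability of positives, for anchors that have at least one positive.
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos_mask.float()).sum(1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()

# Example: combine 3 sentiment polarities and 3 stance polarities into 9 group ids,
# then apply the loss to encoder outputs (shapes and counts are assumptions).
sentiment = torch.randint(0, 3, (16,))
stance = torch.randint(0, 3, (16,))
labels = stance * 3 + sentiment
embeddings = torch.randn(16, 256)
loss = supervised_contrastive_loss(embeddings, labels)
```

In this sketch the contrastive signal encourages the encoder to cluster texts by expression type rather than by target, which is the property the abstract relies on when transferring to unseen targets.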
