Abstract

Zero-shot stance detection is both crucial and challenging because it requires detecting the stances of previously unseen targets at inference time. Effectively learning transferable, target-invariant features from the training data is therefore essential. This paper proposes an adversarial adaptation approach for zero-shot stance detection that employs an adversarial discriminative domain adaptation network to transfer knowledge efficiently. Specifically, the proposed model uses knowledge distillation to avoid both overfitting to the destination data and forgetting the knowledge learned from the source. Moreover, stance contrastive learning is applied to improve the quality of the feature representations for better generalization, and sentiment information is extracted to assist stance detection. Experimental results show that our model performs competitively on two benchmark datasets.
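To illustrate the stance contrastive learning component mentioned above, here is a minimal sketch of a supervised contrastive loss computed over stance labels. It is an illustration only, not the paper's exact formulation: the function name, signature, normalization step, and default temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def stance_contrastive_loss(features: torch.Tensor,
                            labels: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss over stance labels (illustrative sketch).

    features: (N, d) embeddings of one mini-batch (normalised below).
    labels:   (N,) integer stance labels; samples sharing a label are positives.
    """
    features = F.normalize(features, dim=1)          # cosine-style similarity
    n = features.size(0)
    device = features.device

    sim = features @ features.t() / temperature      # (N, N) similarity matrix
    self_mask = torch.eye(n, dtype=torch.bool, device=device)
    sim = sim.masked_fill(self_mask, -1e9)           # exclude self-comparisons

    # Positive pairs share the same stance label (excluding the anchor itself).
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    pos = pos_mask.float()

    log_prob = F.log_softmax(sim, dim=1)             # log p(j | anchor i)
    per_anchor = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)

    # Average only over anchors that have at least one in-batch positive.
    has_pos = pos.sum(1) > 0
    return per_anchor[has_pos].mean() if has_pos.any() else per_anchor.sum() * 0.0
```

The intuition is that pulling together examples with the same stance label, regardless of their targets, encourages the encoder to produce target-invariant clusters that transfer to unseen targets.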
