Multi-target stance detection is the task of identifying the stances expressed toward multiple targets in a text. Most existing multi-target methods detect the stance toward each of two targets independently, so the targets do not complement each other and the semantic information shared between them goes underexploited. In this paper, we propose a contrastive learning based framework for stance agreement detection. Applying contrastive learning to stance agreement detection enables the model to learn richer target features and strengthens the links between the targets' semantic information, so that the targets assist each other during stance detection. In addition, we fine-tune a new model as our encoder to more fully exploit the semantic information hidden in the context. We also apply joint training as a multi-task learning approach, allowing the models to share dataset-specific domain information. Experimental comparisons against different methods show that our approach achieves state-of-the-art results on multi-target benchmark datasets. Finally, we conduct an error analysis of the proposed method, discussing its limitations and offering insights for future improvement.
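The abstract gives no implementation details, but the two ideas it names, a contrastive objective over stance-agreement labels and joint multi-task training of per-target stance heads over a shared encoder, can be illustrated concretely. Below is a minimal PyTorch sketch under those assumptions; all names (`MultiTargetStanceModel`, `supervised_contrastive_loss`, `lambda_c`, and so on) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTargetStanceModel(nn.Module):
    """Hypothetical sketch: a shared encoder representation feeds one stance
    head per target plus a projection head for the contrastive objective."""
    def __init__(self, hidden_dim=768, num_stances=3, proj_dim=128):
        super().__init__()
        self.head_a = nn.Linear(hidden_dim, num_stances)  # stance toward target A
        self.head_b = nn.Linear(hidden_dim, num_stances)  # stance toward target B
        self.proj = nn.Linear(hidden_dim, proj_dim)       # contrastive projection

    def forward(self, reps):
        # reps: pooled encoder outputs, shape (batch, hidden_dim),
        # e.g. from a fine-tuned BERT-style encoder (assumed, not shown).
        z = F.normalize(self.proj(reps), dim=-1)
        return self.head_a(reps), self.head_b(reps), z

def supervised_contrastive_loss(z, agree_labels, tau=0.07):
    """Pull together examples whose stance pairs agree; push apart the rest.
    agree_labels: (batch,) tensor, e.g. 1 if the stances toward the two
    targets agree, 0 otherwise (an assumed labeling scheme)."""
    n = z.size(0)
    sim = z @ z.t() / tau
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))        # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (agree_labels.unsqueeze(0) == agree_labels.unsqueeze(1)) & ~eye
    # average log-probability over each anchor's positives
    pos_log_prob = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    pos_counts = pos.sum(dim=1)
    loss = -pos_log_prob.sum(dim=1) / pos_counts.clamp(min=1)
    return loss[pos_counts > 0].mean()  # skip anchors with no positives

def joint_loss(logits_a, logits_b, z, y_a, y_b, agree, lambda_c=0.5):
    """Joint multi-task objective: per-target cross-entropy plus the
    contrastive term. lambda_c is an assumed trade-off weight."""
    return (F.cross_entropy(logits_a, y_a)
            + F.cross_entropy(logits_b, y_b)
            + lambda_c * supervised_contrastive_loss(z, agree))
```

In this reading, joint training simply backpropagates the summed loss through the shared encoder, which is one common way to let the two target-specific tasks exchange information; the paper's actual architecture and loss weighting may differ.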