Abstract

Although interpretable methods for deep learning models have become popular in sentiment analysis in recent years, existing methods still struggle to provide predictions that are both highly accurate and accompanied by user-friendly explanations. To address this problem, we propose a novel framework called Contrasting Logical Knowledge Learning (CLK) that combines contrastive learning, label knowledge, and logical rule learning. Logical rule learning provides human-understandable explanations, while label knowledge and contrastive learning achieve high performance with both pre-trained models and ordinary DNNs. To ensure model interpretability, we design a novel knowledge reasoning strategy based on the learned logical rules and the trained model. Empirical results on binary and fine-grained sentiment analysis tasks show that CLK effectively balances accuracy and interpretability. Additionally, we conduct two case studies to demonstrate the process of explanation generation and knowledge reasoning, which show that our method’s explanations are causally consistent with the model’s implicit decision logic.
