Abstract

Aspect-based sentiment analysis (ABSA) aims to exploit the interactions between aspect terms and their contexts to predict the sentiment polarity of given aspects in sentences. Current mainstream approaches combine deep neural networks (DNNs) with additional linguistic information to improve performance. DNN-based methods, however, lack the explanation and transparency needed to support their predictions, and no existing model fully resolves the trade-off between explainability and performance. Moreover, most previous studies explain the relationship between inputs and outputs via attribution, an approach that is insufficient for mining the hidden semantics in abstract features. To overcome these limitations, we propose a disentangled linguistic graph model (DLGM) that enhances both transparency and performance by guiding the signal flow. First, we propose a disentangled linguistic representation learning module in which individual neurons extract specific linguistic properties, helping to capture finer-grained feature representations. To further boost explainability, we propose a supervised disentangling module in which labeled linguistic data help reduce information redundancy. Finally, a cross-linguistic routing mechanism is introduced into the signal propagation of linguistic chunks to overcome the limitation of distilling information only within a single linguistic property. Quantitative and qualitative experiments verify the effectiveness and superiority of the proposed DLGM in both sentiment polarity classification and explainability.
