Abstract

Aspect-based sentiment analysis (ABSA) aims to identify the sentiment polarity toward each aspect mentioned in a sentence, so as to understand users' product experience. A pressing challenge for existing ABSA methods is to synthesize a sentence's semantic relevance and syntactic dependencies into a more comprehensive sentiment representation. In this paper, we propose a Dual-Channel Relative Position Guided Attention Network (Dual-RPGA), which learns deep semantic and syntactic sentiment representations that provide reliable knowledge for dynamic fusion and sentiment prediction. First, we design a syntactic graph attention network (Syn-GAT) that learns the syntactic relative position between the aspect and its context to guide the syntactic sentiment representation. Then, we build a semantic attention network (Sem-Attention) that computes semantic attention and similarity coefficients between aspects and contexts to enhance the semantic sentiment representation. Finally, we design a fusion network (Bi-Fusion) that realizes dynamic feature interaction between the semantic and syntactic channels to perform sentiment prediction. We conduct extensive experiments on two groups of datasets to validate Dual-RPGA on the ABSA task. The results show that Dual-RPGA outperforms the best-performing baseline by 0.58% to 1.49% in accuracy.
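The dual-channel pipeline sketched in the abstract can be illustrated with a toy NumPy example. This is only a hedged sketch of the general idea, not the paper's actual model: the semantic channel is approximated by generic dot-product attention (the paper's Sem-Attention additionally uses similarity coefficients), the syntactic channel is approximated by a hypothetical dependency-distance weighting standing in for Syn-GAT, and the fusion gate is one plausible reading of "dynamic feature interaction"; all weights and distances here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions: n context tokens, hidden size d.
n, d = 5, 8
h_ctx = rng.standard_normal((n, d))   # context token features
h_asp = rng.standard_normal(d)        # pooled aspect feature

# Semantic channel: attention of the aspect over context tokens
# (generic scaled dot-product attention as a stand-in).
alpha = softmax(h_ctx @ h_asp / np.sqrt(d))
h_sem = alpha @ h_ctx

# Syntactic channel: stand-in for the Syn-GAT output; tokens are
# weighted by a toy syntactic relative position (dependency
# distance to the aspect), closer tokens getting more weight.
dist = np.array([2.0, 1.0, 0.0, 1.0, 2.0])  # hypothetical distances
beta = softmax(-dist)
h_syn = beta @ h_ctx

# Fusion: a sigmoid gate over the concatenated channel outputs,
# one simple way to let the two channels interact dynamically.
W = rng.standard_normal((2 * d, d))
g = 1.0 / (1.0 + np.exp(-(np.concatenate([h_sem, h_syn]) @ W)))
h_fused = g * h_sem + (1.0 - g) * h_syn

assert h_fused.shape == (d,)
```

In a real model the fused representation `h_fused` would feed a classifier head; here it merely shows how two per-aspect feature vectors can be combined by a learned gate rather than simple concatenation.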
