Abstract

Aspect-Level Sentiment Classification (ALSC) aims to assign a sentiment to a sentence with respect to each of its aspects, and is one of the central challenges in Natural Language Processing (NLP). Although numerous approaches have been proposed and have achieved prominent results, most of them focus on the relationships between aspect and opinion words within a single instance while ignoring correlations with other instances, so models inevitably become trapped in local optima for lack of a global viewpoint. A representation derived from a single instance suffers in two ways: the information it contains is insufficient because it lacks descriptions from other perspectives, and the knowledge it stores is redundant because extraneous content cannot be filtered out. To obtain a more polished instance representation, we develop a Retrieval Contrastive Learning (RCL) framework that extracts intrinsic knowledge across instances. RCL consists of two modules: (a) obtaining retrieved instances with a sparse retriever and a dense retriever, and (b) extracting and learning the knowledge of the retrieved instances via Contrastive Learning (CL). To demonstrate the effectiveness of RCL, five ALSC models are evaluated in comprehensive experiments on three widely used benchmarks. Trained with RCL, these models achieve substantial improvements over their baselines. In particular, ABSA-DeBERTa with RCL obtains new state-of-the-art results, outperforming advanced methods by 0.92%, 0.23%, and 0.47% in Macro F1 on Laptops, Restaurants, and Twitter, respectively.
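To make the second module concrete, the sketch below shows one standard way such a contrastive objective over retrieved instances can be written. It is a minimal illustration, not the authors' implementation: the abstract does not specify the loss form, so an InfoNCE-style loss is assumed, and the function name, argument shapes, and temperature value are all illustrative.

import torch
import torch.nn.functional as F

def retrieval_contrastive_loss(anchor, positives, negatives, temperature=0.07):
    """Illustrative InfoNCE-style loss: pull the anchor representation
    toward retrieved instances assumed to share its sentiment (positives)
    and push it away from the remaining retrieved instances (negatives).

    anchor:    (d,)    representation of the current instance
    positives: (p, d)  representations of retrieved same-label instances
    negatives: (n, d)  representations of retrieved other-label instances
    """
    anchor = F.normalize(anchor, dim=-1)
    pos = F.normalize(positives, dim=-1)
    neg = F.normalize(negatives, dim=-1)

    # Cosine similarities, scaled by temperature.
    pos_sim = pos @ anchor / temperature  # (p,)
    neg_sim = neg @ anchor / temperature  # (n,)

    # Contrast each positive against all negatives: the correct "class"
    # for every row is the positive in column 0.
    logits = torch.cat(
        [pos_sim.unsqueeze(1),
         neg_sim.unsqueeze(0).expand(pos_sim.size(0), -1)],
        dim=1,
    )
    labels = torch.zeros(pos_sim.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)

In a pipeline of the kind the abstract describes, the positive and negative sets would come from the sparse and dense retrievers, and this term would be added to the base ALSC classification loss.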
