Abstract

Aspect-based sentiment classification (ABSC) aims to predict the sentiment polarity of each aspect mentioned in a sentence or document. Recent research integrates sentiment terms into pretrained language models, so the accuracy with which those terms are mined directly affects ABSC performance. This paper introduces ASK-RoBERTa, a sentiment knowledge-adaptive pretraining model. A sentiment word dictionary is first built from both general-purpose and domain-specific sentiment words. We then develop a series of aspect-term and sentiment-term mining rules based on part-of-speech tagging and sentence dependency grammar; these rules account for word dependencies, compounding, and conjunctions. The pretraining model is optimized over the output of the mining rules to capture the dependencies between aspects and sentiment words. Experimental results on multiple public benchmark datasets demonstrate the satisfactory performance of ASK-RoBERTa.
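To make the rule-based mining step concrete, the sketch below illustrates the kind of part-of-speech and dependency-grammar rules the abstract describes, using spaCy and a toy sentiment lexicon. The specific rules (adjectival modifier, copular predicate, conjunction), the lexicon, and the function names are illustrative assumptions, not the authors' implementation; running it requires spaCy and its `en_core_web_sm` model.

```python
# A minimal sketch of dependency-based aspect-sentiment pair mining.
# The rule set (amod, nsubj/acomp, conj) only illustrates the kind of
# rules described in the abstract; it is not ASK-RoBERTa's actual rules.
import spacy

nlp = spacy.load("en_core_web_sm")

# Toy stand-in for the general + domain-specific sentiment dictionary.
SENTIMENT_LEXICON = {"delicious": "positive", "cheap": "positive",
                     "slow": "negative", "rude": "negative"}

def mine_pairs(text):
    """Extract (aspect, sentiment_word) pairs via simple dependency rules."""
    pairs = []
    doc = nlp(text)
    for tok in doc:
        if tok.lemma_.lower() not in SENTIMENT_LEXICON:
            continue
        # Rule 1: adjectival modifier ("delicious food" -> food/delicious).
        if tok.dep_ == "amod" and tok.head.pos_ == "NOUN":
            pairs.append((tok.head.text, tok.text))
        # Rule 2: copular predicate ("the food is delicious").
        elif tok.dep_ == "acomp":
            for child in tok.head.children:
                if child.dep_ == "nsubj":
                    pairs.append((child.text, tok.text))
        # Rule 3: conjunction ("delicious and cheap food" shares the aspect).
        elif tok.dep_ == "conj" and tok.head.lemma_.lower() in SENTIMENT_LEXICON:
            if tok.head.dep_ == "amod" and tok.head.head.pos_ == "NOUN":
                pairs.append((tok.head.head.text, tok.text))
    return pairs

print(mine_pairs("The food is delicious and the service was slow."))
# Expected (parse permitting): [('food', 'delicious'), ('service', 'slow')]
```

In the full approach, pairs mined this way would supply the supervision signal for the sentiment knowledge-adaptive pretraining objective rather than being used directly at inference time.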
