Abstract

Aspect-based sentiment analysis (ABSA) aims to associate a text with a set of aspects and infer their respective sentiment polarities. State-of-the-art approaches are built on fine-tuning pre-trained language models, focusing on learning aspect-specific representations from the corpus. However, aspects are often expressed implicitly, making the mapping between text and implicit aspects challenging to learn without sufficient labeled examples, which may be scarce in real-world scenarios. This paper proposes a unified framework to address the aspect categorization and aspect-based sentiment subtasks. We introduce a mechanism to construct an auxiliary sentence for the implicit aspect using the corpus's semantic information. We then encourage BERT to learn the aspect-specific representation in response to this auxiliary sentence, rather than the aspect itself. We evaluate our approach on real benchmark datasets for both ABSA and Targeted-ABSA tasks. Our experiments show that it consistently achieves state-of-the-art performance in both aspect categorization and aspect-based sentiment prediction across all datasets, with considerable improvement margins. The BERT-ASC code is available at https://github.com/amurtadha/BERT-ASC.
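To illustrate the auxiliary-sentence idea, the following minimal sketch pairs a review with a hypothetical auxiliary sentence built from corpus-related seed words and feeds the pair to BERT as a two-segment input. It uses the Hugging Face transformers API rather than the authors' released BERT-ASC code, and the auxiliary sentence and label set are illustrative assumptions.

```python
# Minimal sketch of sentence-pair encoding for aspect-based sentiment with BERT.
# The auxiliary sentence below is a hypothetical example; it is NOT the authors'
# BERT-ASC construction procedure, only an illustration of the pairing idea.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # e.g., negative / neutral / positive
)

review = "The waiter brought it out cold and an hour late."
# Hypothetical auxiliary sentence: semantically related words that make the
# implicit aspect (here, service) explicit for the encoder.
auxiliary = "waiter service staff slow"

# BERT receives the pair as [CLS] review [SEP] auxiliary [SEP], so the learned
# representation is conditioned on the auxiliary sentence, not the aspect label.
inputs = tokenizer(review, auxiliary, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # untrained classification head: output is illustrative only
```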
