Abstract

In aspect-based sentiment analysis, a fundamental task is extracting aspect terms from opinionated sentences. Aspect term extraction (ATE) plays a critical role in several scenarios, such as service quality improvement and recommendation systems. While deep learning-based methods have achieved great progress in ATE, they mainly consider sequential semantic information and generally ignore the syntactic relations of the whole sentence and their contribution to overall meaning. Furthermore, the performance of these methods may also be diminished by poor handling of relation noise and text noise. To address these issues, we propose a fused sequential and hierarchical representation (FSHR) model, in which both sequential and hierarchical representations are generated. This facilitates not only the capture of linear semantic information for predicting meaning-related aspect terms but also the utilisation of syntactic relations over the entire sentence to better identify structure-related aspect terms. Moreover, to refine the aspect representation, we incorporate a relation-gate mechanism that selectively activates meaningful syntactic dependency paths and design a multi-way aspect attention mechanism that prompts the model to focus on text segments relevant to particular aspects. Finally, the sequential and hierarchical representations are adaptively fused for aspect prediction. Experimental results on four datasets demonstrate that FSHR outperforms competitive baselines, and further extensive analyses confirm the effectiveness of our model.
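To make the idea of adaptive fusion concrete, the following is a minimal sketch of gating two token-level views of a sentence, a sequential encoding and a hierarchical (dependency-based) encoding, into per-token tag logits for ATE. The module name, shapes, and the sigmoid gate are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: adaptively fuse a sequential representation (e.g. from a
# BiLSTM/Transformer) with a hierarchical representation (e.g. from a
# dependency-tree encoder) via a learned per-token gate. Illustrative only.
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    def __init__(self, hidden_dim: int, num_tags: int):
        super().__init__()
        # Gate decides, per token and per dimension, how much of each view to keep.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_tags)  # e.g. BIO tags for ATE

    def forward(self, h_seq: torch.Tensor, h_hier: torch.Tensor) -> torch.Tensor:
        # h_seq, h_hier: (batch, seq_len, hidden_dim)
        g = torch.sigmoid(self.gate(torch.cat([h_seq, h_hier], dim=-1)))
        fused = g * h_seq + (1.0 - g) * h_hier   # adaptive fusion of the two views
        return self.classifier(fused)            # per-token tag logits


if __name__ == "__main__":
    # Random tensors stand in for the two encoder outputs.
    batch, seq_len, dim = 2, 10, 128
    model = GatedFusion(hidden_dim=dim, num_tags=3)  # B / I / O
    logits = model(torch.randn(batch, seq_len, dim), torch.randn(batch, seq_len, dim))
    print(logits.shape)  # torch.Size([2, 10, 3])
```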
