Abstract
Target-Oriented Opinion Word Extraction (TOWE) is a challenging information extraction task that aims to find the opinion words corresponding to given opinion targets in text. To solve TOWE, it is important to consider the words surrounding the opinion words as well as the opinion targets. Although most existing works capture the opinion target using Deep Neural Networks (DNNs), they cannot effectively utilize the local context, i.e., the relationships among the words surrounding the opinion words. In this work, we propose a novel and powerful model for TOWE, the Gated Relational target-aware Encoder and local context-aware Decoder (GRED), which dynamically leverages the information of the opinion target and the local context. Intuitively, the target-aware encoder captures the opinion target information, and the local context-aware decoder obtains the local context information from the relationships among surrounding words. GRED then employs a gate mechanism to dynamically aggregate the outputs of the encoder and the decoder. In addition, we adopt a pretrained language model, the Bidirectional and Auto-Regressive Transformer (BART), as the backbone of GRED to exploit its implicit language knowledge. Extensive experiments on four benchmark datasets show that GRED surpasses all baseline models and achieves state-of-the-art performance. Furthermore, our in-depth analysis demonstrates that GRED properly leverages the information of the opinion target and the local context for extracting the opinion words.
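The gated aggregation of the two streams can be pictured as follows. This is a minimal sketch, assuming a sigmoid gate computed from the concatenated encoder and decoder hidden states; the class name, hidden size, and the downstream token-level tagging head are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Illustrative gate blending target-aware encoder states with
    local context-aware decoder states (hypothetical names and dims)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Gate is computed from the concatenation of both streams.
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, enc_states: torch.Tensor, dec_states: torch.Tensor) -> torch.Tensor:
        # enc_states, dec_states: (batch, seq_len, hidden_size)
        g = torch.sigmoid(self.gate(torch.cat([enc_states, dec_states], dim=-1)))
        # Convex combination: per token and per dimension, the gate decides how much
        # opinion-target information vs. local-context information to keep.
        return g * enc_states + (1.0 - g) * dec_states


# Usage sketch: the fused states would feed a token-level classifier
# that tags each word as part of an opinion word span or not.
fusion = GatedFusion(hidden_size=768)
enc = torch.randn(2, 10, 768)   # e.g., BART encoder outputs
dec = torch.randn(2, 10, 768)   # e.g., BART decoder outputs
fused = fusion(enc, dec)        # (2, 10, 768)
```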