Abstract
Semantic Role Labeling (SRL) is a core task in natural language understanding and has been widely studied by the research community. State-of-the-art lexical resources exist for defining semantic role arguments with respect to predicates. However, such lexical resources are complex and difficult to understand. Therefore, instead of the classical semantic role arguments, we adopt the 5W1H concept (Who, What, When, Where, Why, and How) for SRL. The 5W1H concept is widely used in journalism and is much simpler and easier to understand than classical SRL lexical resources. In recent years, end-to-end SRL systems based on recurrent neural networks (RNNs) have gained significant attention; however, recent work has targeted formal texts. This paper reports on the implementation of a deep neural network with an attention mechanism for extracting the 5W1H from tweets. Our implementation achieves an F1-score of 88.21, outperforming a recent Twitter SRL system by 28.72 points.
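To make the 5W1H framing concrete, the sketch below shows how token-level role predictions over a tweet could be grouped into 5W1H slots. The BIO-style tagging scheme and the helper `group_5w1h` are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (assumed BIO tagging scheme, hypothetical helper):
# group contiguous B-/I- role tags over tweet tokens into 5W1H slots.

def group_5w1h(tokens, tags):
    """Collect contiguous B-ROLE / I-ROLE spans into {role: [phrase, ...]}."""
    slots = {}
    phrase, role = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if role:  # close the previous span
                slots.setdefault(role, []).append(" ".join(phrase))
            role, phrase = tag[2:], [tok]
        elif tag.startswith("I-") and role == tag[2:]:
            phrase.append(tok)  # continue the current span
        else:
            if role:  # an O tag (or mismatched I-) closes the span
                slots.setdefault(role, []).append(" ".join(phrase))
            role, phrase = None, []
    if role:  # flush a span that runs to the end of the tweet
        slots.setdefault(role, []).append(" ".join(phrase))
    return slots

tokens = ["Floods", "hit", "Chennai", "on", "Monday"]
tags = ["B-WHAT", "O", "B-WHERE", "O", "B-WHEN"]
print(group_5w1h(tokens, tags))
# → {'WHAT': ['Floods'], 'WHERE': ['Chennai'], 'WHEN': ['Monday']}
```

In a full system, the tags would come from the attention-based neural tagger described in the paper; this post-processing step only illustrates how 5W1H slots are simpler to read than classical frame-specific argument labels.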