Abstract

Semantic Role Labeling (SRL) is one of the most challenging tasks in Natural Language Processing (NLP). SRL consists of predicate identification, argument identification, and argument classification. In this article we present a novel approach to argument classification based on a deep neural network architecture. Traditional discrete, feature-based SRL relies heavily on feature engineering over syntactic structures, in contrast to deep learning approaches that encode whole sentences without taking syntactic features into account. We present an approach that combines syntactic features with external word representations from FastText. The advantages of FastText embeddings are better vector representations for rare words and better handling of out-of-vocabulary words, properties that make them well suited to morphologically rich languages. Most SRL systems today are trained on resource-rich languages; in this article we present a novel neural architecture for SRL suited to resource-poor, morphologically rich languages. Our architecture for argument classification is based on a Bidirectional Long Short-Term Memory (Bi-LSTM) encoder with Conditional Random Field (CRF) decoding to find the optimal label sequence. Experiments on the hr500k corpus show that our syntax-aware approach achieves competitive results for argument classification, coming very close to benchmark results with an F1 score of 72%.
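
As an illustration of the kind of architecture described above, the following is a minimal sketch (in PyTorch, using the third-party pytorch-crf package) of a Bi-LSTM tagger with CRF decoding over concatenated FastText and syntactic-feature vectors. All names, dimensions, and the exact feature layout are illustrative assumptions, not the authors' implementation.

# Minimal Bi-LSTM + CRF argument-classification sketch.
# Assumes token features (FastText vectors + syntactic-feature embeddings)
# are built outside the model and concatenated per token.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf


class BiLSTMCRFTagger(nn.Module):
    def __init__(self, word_emb_dim=300, syn_feat_dim=50,
                 hidden_dim=256, num_labels=20):
        super().__init__()
        # Bidirectional LSTM over the concatenated per-token features.
        self.encoder = nn.LSTM(word_emb_dim + syn_feat_dim,
                               hidden_dim // 2,
                               batch_first=True,
                               bidirectional=True)
        # Per-token emission scores for each argument label.
        self.emissions = nn.Linear(hidden_dim, num_labels)
        # Linear-chain CRF for scoring and decoding label sequences.
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, token_features, labels=None, mask=None):
        # token_features: (batch, seq_len, word_emb_dim + syn_feat_dim)
        hidden, _ = self.encoder(token_features)
        scores = self.emissions(hidden)
        if labels is not None:
            # Training: negative log-likelihood of the gold sequence under the CRF.
            return -self.crf(scores, labels, mask=mask, reduction='mean')
        # Inference: Viterbi decoding returns the highest-scoring label sequence.
        return self.crf.decode(scores, mask=mask)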
