Abstract
This paper introduces a novel neural network architecture for classifying temporal relations between events in Arabic sentences. The model integrates a deep learning pipeline that combines several techniques. First, the Bidirectional Encoder Representations from Transformers (BERT) model is used to obtain a contextual representation of each word. This representation is enriched with part-of-speech (POS) embeddings, event-position features, and the output of a convolutional neural network (CNN) applied at the head of the sentence; together, these features capture the complex relationships between words in their context. The architecture then stacks two Bidirectional Long Short-Term Memory (BiLSTM) layers, complemented by an attention mechanism that weights each word according to its relevance to the temporal relation type. A second CNN processes the entire sentence, and a final fully connected layer with softmax predicts the temporal relation category from the concatenated vectors produced by the BiLSTM layers, the attention mechanism, and the CNNs. Experimental results on the Ara-TimeBank corpus show that the model achieves an F1-score of 89%, outperforming prior work on this task.
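To make the described pipeline concrete, the sketch below shows one way the components could be wired together in PyTorch. All dimensions, the POS tag-set size, the number of relation classes, and the module names are illustrative assumptions rather than values from the paper, and the BERT encoder is assumed to be run separately with its token vectors passed in.

```python
import torch
import torch.nn as nn

class TemporalRelationClassifier(nn.Module):
    """Sketch of the pipeline: BERT token vectors are concatenated with POS and
    event-position embeddings, fed to a first CNN and to two stacked BiLSTMs
    with attention, then a second CNN; the concatenated vectors go to a
    softmax classifier. All sizes are hypothetical."""

    def __init__(self, bert_dim=768, pos_vocab=32, pos_dim=16,
                 position_dim=16, max_len=128, lstm_dim=128,
                 cnn_filters=64, num_classes=13):
        super().__init__()
        self.pos_emb = nn.Embedding(pos_vocab, pos_dim)              # POS tags
        self.position_emb = nn.Embedding(2 * max_len, position_dim)  # distance to an event
        feat_dim = bert_dim + pos_dim + 2 * position_dim             # per-token feature size

        # First CNN over the token-level features
        self.cnn1 = nn.Conv1d(feat_dim, cnn_filters, kernel_size=3, padding=1)

        # Two stacked BiLSTM layers
        self.bilstm = nn.LSTM(feat_dim, lstm_dim, num_layers=2,
                              bidirectional=True, batch_first=True)

        # Additive attention over the BiLSTM states
        self.attn = nn.Linear(2 * lstm_dim, 1)

        # Second CNN over the whole sentence (BiLSTM outputs)
        self.cnn2 = nn.Conv1d(2 * lstm_dim, cnn_filters, kernel_size=3, padding=1)

        # Final classifier over the concatenated vectors
        self.fc = nn.Linear(2 * lstm_dim + 2 * cnn_filters, num_classes)

    def forward(self, bert_out, pos_ids, dist_e1, dist_e2):
        # bert_out: (B, T, bert_dim) contextual vectors from a separate BERT pass
        # dist_e1/dist_e2: token distances to the two events, shifted into [0, 2*max_len)
        x = torch.cat([bert_out,
                       self.pos_emb(pos_ids),
                       self.position_emb(dist_e1),
                       self.position_emb(dist_e2)], dim=-1)          # (B, T, feat_dim)

        c1 = torch.relu(self.cnn1(x.transpose(1, 2))).max(dim=2).values  # (B, cnn_filters)

        h, _ = self.bilstm(x)                                        # (B, T, 2*lstm_dim)

        # Attention: weight each word by its relevance to the relation type
        a = torch.softmax(self.attn(h).squeeze(-1), dim=1)           # (B, T)
        ctx = (a.unsqueeze(-1) * h).sum(dim=1)                       # (B, 2*lstm_dim)

        c2 = torch.relu(self.cnn2(h.transpose(1, 2))).max(dim=2).values  # (B, cnn_filters)

        # Logits; softmax is applied implicitly by nn.CrossEntropyLoss at training time
        return self.fc(torch.cat([ctx, c1, c2], dim=-1))
```

Training such a model with `nn.CrossEntropyLoss` over the relation labels corresponds to the softmax classification step described above.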