Detecting the linguistic phenomena of negation and speculation is vital for the performance of Arabic Natural Language Processing (ANLP) tasks. Negation and speculation scope detection have been addressed in a number of studies, most of which focus on English and Spanish; this is largely due to the lack of Arabic corpora annotated for negation and speculation. In this work, the ArNeg corpus, originally annotated for negation, is extended with speculation annotations to build the ArNegSpec corpus. In addition, we propose a transformer-based learning approach for detecting both negation and speculation in Arabic texts: AraBERT models are combined with a Bidirectional Long Short-Term Memory and a Conditional Random Field (BiLSTM-CRF) as a sequence classification layer. The results reach an F1 measure of 98% for cue identification for both negation and speculation. The proposed approach improves negation scope detection by 6% in F1 measure compared to the previous study. Furthermore, it achieves a 95% F1 measure for speculation scope detection and a PCS (percentage of correct scopes) value of 96% for both the negation and speculation scopes. This approach demonstrates the feasibility of transformer-based learning models for sequence classification tasks such as negation and speculation detection in Arabic.
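The abstract only names the architecture, so the following is a minimal sketch of how an AraBERT encoder might be combined with a BiLSTM-CRF head for token-level cue/scope tagging. The checkpoint name (`aubmindlab/bert-base-arabertv02`), label count, and hidden sizes are illustrative assumptions, not details taken from the paper.

```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf

class AraBertBiLstmCrf(nn.Module):
    """Sketch of a token-level tagger: AraBERT encoder -> BiLSTM -> CRF."""
    def __init__(self, model_name="aubmindlab/bert-base-arabertv02",
                 num_labels=3, lstm_hidden=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        enc_dim = self.encoder.config.hidden_size
        self.bilstm = nn.LSTM(enc_dim, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        # Contextual subword representations from AraBERT
        hidden = self.encoder(input_ids,
                              attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(hidden)          # BiLSTM over the sequence
        emissions = self.classifier(lstm_out)      # per-token label scores
        mask = attention_mask.bool()
        if labels is not None:
            # Training: negative log-likelihood over non-padding positions
            return -self.crf(emissions, labels, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the best label sequence
        return self.crf.decode(emissions, mask=mask)
```

In such a setup, cue identification and scope detection would each be cast as assigning one label per token (e.g. cue / in-scope / outside), with the CRF enforcing consistent label transitions across the sentence.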