Abstract

Recently, the cascade binary tagging framework has proved highly effective for overlapping relation extraction. However, few studies apply this framework to Chinese text, and the positional encoding in its encoder is not well adapted to Chinese input sequences. To advance the study of overlapping relations in Chinese, this paper proposes a new positional encoding method for the Transformer and builds a model called SaTRE (Span-adaptive Transformer for the Cascade Relation triple Extraction). First, we treat each character or word-segmentation result of the input sequence as a span and concatenate all spans into a new sequence. Second, we find that the absolute span-to-position (and position-to-span) relation and the relative relation between positions are equally essential for entity span extraction in Chinese sequences. Hence, in the self-attention module of the encoder, SaTRE decouples attention, representing span-to-position (position-to-span) correlation with absolute head-tail positional encoding and positional correlation with span-adaptive relative positional encoding. Third, we investigate whether the original periodic coefficient of the sinusoidal positional encoding is necessarily the most effective choice. Extensive experiments and ablation studies confirm the effectiveness of the proposed method.
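
To make the decoupled attention concrete, the sketch below gives a minimal, single-head rendering of the score decomposition described above. The abstract does not state SaTRE's exact equations, so the four-term decomposition (in the style of disentangled attention), the tensor names, and the parameterized sinusoidal base are illustrative assumptions rather than the authors' implementation; the only fact taken as given is that the original Transformer uses 10000 as the sinusoidal period base.

    # Hypothetical, single-head sketch of the decoupled attention described
    # in the abstract. The exact SaTRE formulation is not given there, so the
    # four-term decomposition and all tensor names below are assumptions.
    import torch

    def sinusoidal_pe(n: int, d: int, base: float = 10000.0) -> torch.Tensor:
        """Sinusoidal positional encoding; `base` is the periodic coefficient
        (10000 in the original Transformer) whose choice the paper probes."""
        pos = torch.arange(n, dtype=torch.float32).unsqueeze(1)           # (n, 1)
        freq = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)  # (d/2,)
        pe = torch.zeros(n, d)
        pe[:, 0::2] = torch.sin(pos * freq)
        pe[:, 1::2] = torch.cos(pos * freq)
        return pe

    def decoupled_scores(content, abs_pos, rel_pos):
        """content: (n, d) span embeddings; abs_pos: (n, d) absolute head-tail
        positional embeddings; rel_pos: (n, n, d) span-adaptive relative
        positional embeddings between every pair of spans."""
        d = content.size(-1)
        c2c = content @ content.T   # content-to-content correlation
        s2p = content @ abs_pos.T   # span-to-position (absolute encoding)
        p2s = abs_pos @ content.T   # position-to-span (absolute encoding)
        p2p = torch.einsum('id,ijd->ij', content, rel_pos)  # relative positions
        return (c2c + s2p + p2s + p2p) / (4 * d) ** 0.5

    # Toy usage: 6 spans with 16-dimensional embeddings.
    n, d = 6, 16
    content = torch.randn(n, d)
    abs_pos = sinusoidal_pe(n, d)
    rel_pos = torch.randn(n, n, d)
    attn = torch.softmax(decoupled_scores(content, abs_pos, rel_pos), dim=-1)

Exposing `base` as an argument of `sinusoidal_pe` mirrors the paper's third question: whether the default periodic coefficient of 10000 is necessarily the best choice for Chinese span sequences.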
