Abstract

With the emergence of large-scale pre-trained models based on the Transformer architecture, performance on natural language processing tasks has been pushed to a new level. However, because of the high complexity of the Transformer's self-attention mechanism, these models handle long text poorly. To address this problem, a long-text processing method named HBert, based on BERT and a hierarchical attention neural network, is proposed. First, the long text is split into multiple sentences, and each sentence vector is obtained through a word encoder composed of BERT and a word attention layer. The article vector is then obtained through a sentence encoder composed of a Transformer and a sentence attention layer, and this article vector is used for the downstream tasks. The experimental results show that the proposed HBert method achieves good results in text classification and QA tasks, with an F1 value of 95.7% on longer text classification tasks and 75.2% on QA tasks, outperforming the state-of-the-art model Longformer.

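To make the hierarchical design concrete, below is a minimal PyTorch sketch of the word-encoder/sentence-encoder structure described in the abstract. The module names, layer sizes, pooling details, and the use of bert-base-uncased are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class AttentionPool(nn.Module):
    """Additive attention pooling over a sequence of vectors."""
    def __init__(self, hidden_size):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)
        self.context = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, x, mask=None):  # x: (batch, seq_len, hidden)
        scores = self.context(torch.tanh(self.proj(x))).squeeze(-1)
        if mask is not None:
            scores = scores.masked_fill(~mask, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return (weights * x).sum(dim=1)  # (batch, hidden)


class HierarchicalEncoder(nn.Module):
    """Word encoder (BERT + word attention) produces sentence vectors;
    sentence encoder (Transformer + sentence attention) produces the article vector."""
    def __init__(self, bert_name="bert-base-uncased", num_layers=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.word_attention = AttentionPool(hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.sentence_encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.sentence_attention = AttentionPool(hidden)

    def forward(self, input_ids, attention_mask):
        # input_ids / attention_mask: (num_sentences, max_tokens) for one article
        token_states = self.bert(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state
        sent_vecs = self.word_attention(token_states,
                                        mask=attention_mask.bool())   # (num_sentences, hidden)
        sent_seq = self.sentence_encoder(sent_vecs.unsqueeze(0))      # (1, num_sentences, hidden)
        return self.sentence_attention(sent_seq)                      # (1, hidden) article vector


# Usage: split the long text into sentences, tokenize each, then encode the article.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
sentences = ["First sentence of the article.", "Second sentence of the article."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
model = HierarchicalEncoder()
article_vec = model(batch["input_ids"], batch["attention_mask"])
print(article_vec.shape)  # torch.Size([1, 768])
```

The resulting article vector would then feed a task-specific head (for example, a linear classifier for text classification), consistent with the abstract's description of using the article vector for subsequent tasks.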