Abstract

Accurately summarizing Assamese text remains a significant challenge in natural language processing (NLP): manually summarizing lengthy Assamese documents is time-consuming and labor-intensive, which has made automatic text summarization a critical NLP research topic. In this study, we combine the Transformer architecture with self-attention to develop an abstractive text summarization model for Assamese. The self-attention mechanism helps the model handle co-reference in Assamese text, improving its overall understanding of the input, and the proposed approach greatly improves the efficiency of text summarization. We evaluated the model extensively on the Assamese dataset (AD-50), which contains human-written reference summaries, and it outperformed current state-of-the-art baseline models. For example, on AD-50 the proposed model reached a training loss of 0.0022 over 20 training epochs and a model accuracy of 47.15%. This work marks a substantial advancement in Assamese abstractive text summarization, with promising implications for practical NLP applications.
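
The abstract attributes the model's handling of co-reference to scaled dot-product self-attention within a Transformer. The sketch below illustrates that mechanism only; the sequence length, embedding sizes, and random weights are illustrative assumptions and are not taken from the paper or its model.

```python
# Minimal single-head scaled dot-product self-attention, as used in Transformer
# encoders/decoders. All sizes and weights here are toy assumptions.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Attend a sequence of token embeddings to itself.

    X:   (seq_len, d_model) token embeddings
    W_*: (d_model, d_k) projection matrices for queries, keys, values
    Returns a (seq_len, d_k) matrix of attended representations.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    # Every token scores every other token; softmax turns scores into weights,
    # which is what lets distant co-referring tokens influence each other.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model, d_k = 6, 16, 8          # assumed toy dimensions
    X = rng.normal(size=(seq_len, d_model))   # stand-in for Assamese token embeddings
    W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
    print(self_attention(X, W_q, W_k, W_v).shape)  # (6, 8)
```

A full summarization model would stack multi-head versions of this layer in an encoder-decoder (or decoder-only) Transformer and train it on document-summary pairs such as those in AD-50; this snippet shows only the attention computation itself.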
