Abstract

We propose an abstractive Arabic text summarization system built on a sequence-to-sequence model. Our goal is to construct sequence-to-sequence models from several deep artificial neural network architectures and determine which performs best. The encoder and decoder are built from multiple layers of recurrent neural networks, gated recurrent units, recursive neural networks, convolutional neural networks, long short-term memory, and bidirectional long short-term memory. In this study, we re-implement the fundamental summarization model within the sequence-to-sequence framework. We implemented these models with the Keras library in a Google Colab Jupyter notebook. The results further demonstrate that one of the key techniques behind breakthrough performance with deep neural networks in abstractive summarization models is the use of Gensim word embeddings over other text representations, together with FastText, a library for efficient learning of word representations and sentence classification.
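
To make the encoder-decoder setup concrete, the following is a minimal sketch of one of the described variants (an LSTM-based sequence-to-sequence model in Keras). The vocabulary size, embedding dimension, and hidden-state size are illustrative placeholders, not the paper's actual settings.

```python
# Minimal Keras sketch of an LSTM encoder-decoder for abstractive
# summarization; all sizes below are assumed, not taken from the paper.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

VOCAB_SIZE = 30000   # assumed vocabulary size
EMB_DIM = 300        # assumed embedding dimension
LATENT_DIM = 256     # assumed hidden-state size

# Encoder: embeds the source article and returns its final LSTM states.
encoder_inputs = Input(shape=(None,))
enc_emb = Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(encoder_inputs)
_, state_h, state_c = LSTM(LATENT_DIM, return_state=True)(enc_emb)

# Decoder: generates the summary token by token, initialized with the
# encoder's final states.
decoder_inputs = Input(shape=(None,))
dec_emb = Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(decoder_inputs)
decoder_outputs, _, _ = LSTM(
    LATENT_DIM, return_sequences=True, return_state=True
)(dec_emb, initial_state=[state_h, state_c])
outputs = Dense(VOCAB_SIZE, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
model.summary()
```

The GRU and bidirectional LSTM variants mentioned above follow the same pattern, swapping the recurrent layer type while keeping the encoder-decoder structure intact.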
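
Similarly, a minimal sketch of training FastText word vectors with Gensim, as used for the embedding layer; the toy corpus and hyperparameters here are assumptions for illustration only.

```python
# Minimal Gensim FastText sketch; the corpus is a toy placeholder
# standing in for the Arabic training articles, and the hyperparameters
# are illustrative, not the values used in the paper.
from gensim.models import FastText

sentences = [
    ["النص", "العربي", "ملخص"],
    ["تلخيص", "النص", "التلقائي"],
]

ft = FastText(vector_size=300, window=5, min_count=1, epochs=10)
ft.build_vocab(corpus_iterable=sentences)
ft.train(corpus_iterable=sentences,
         total_examples=ft.corpus_count,
         epochs=ft.epochs)

# Subword modeling lets FastText produce vectors even for unseen words.
vec = ft.wv["النص"]
print(vec.shape)  # (300,)
```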
