Automatic text summarization has been a challenging topic in natural language processing (NLP), as it demands preserving the important information of a long text while condensing it into a short summary. Extractive and abstractive summarization are the two widely investigated approaches. In extractive summarization, the most important sentences are selected from the source text and combined to form a summary, whereas abstractive summarization generates a summary focused on the meaning of the text rather than its surface content. Abstractive summarization has therefore gained more attention from researchers in the recent past. However, text summarization remains largely unexplored for the Nepali language. To this end, we propose an abstractive text summarization model for Nepali text. First, we create a Nepali text dataset by scraping Nepali news from online news portals. Second, we design a deep-learning-based summarization model built on an encoder-decoder recurrent neural network with attention; more precisely, Long Short-Term Memory (LSTM) cells are used in the encoder and decoder layers. Third, we build nine different models by varying hyper-parameters such as the number of layers and the number of hidden units. Finally, we report the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) score for each model to evaluate its performance. Among the nine models, the model with a single-layer encoder and 256 hidden states outperformed all others, with F-scores of 15.74, 3.29, and 15.21 for ROUGE-1, ROUGE-2, and ROUGE-L, respectively.
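The abstract only names the architecture, so the following is a minimal sketch of what an LSTM encoder-decoder with attention of this shape could look like, assuming TensorFlow/Keras. The vocabulary size, embedding dimension, and teacher-forced training setup are illustrative assumptions, not details from the paper; only the 256-unit, single-layer configuration comes from the reported best model.

```python
# Hedged sketch of a single-layer LSTM encoder-decoder with additive attention.
# VOCAB_SIZE and EMBED_DIM are assumed values; HIDDEN = 256 follows the paper's
# best-performing configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 30_000  # assumption: size of the Nepali subword/word vocabulary
EMBED_DIM = 128      # assumption: embedding dimension
HIDDEN = 256         # hidden-state size of the reported best model

# Encoder: embed the article tokens and run them through one LSTM layer.
enc_inputs = layers.Input(shape=(None,), name="article_tokens")
enc_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(enc_inputs)
enc_outputs, enc_h, enc_c = layers.LSTM(
    HIDDEN, return_sequences=True, return_state=True)(enc_emb)

# Decoder: teacher-forced summary tokens, initialized with the encoder state.
dec_inputs = layers.Input(shape=(None,), name="summary_tokens")
dec_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(dec_inputs)
dec_outputs, _, _ = layers.LSTM(
    HIDDEN, return_sequences=True, return_state=True)(
        dec_emb, initial_state=[enc_h, enc_c])

# Additive (Bahdanau-style) attention: decoder states query encoder outputs.
context = layers.AdditiveAttention()([dec_outputs, enc_outputs])
concat = layers.Concatenate()([dec_outputs, context])
probs = layers.TimeDistributed(
    layers.Dense(VOCAB_SIZE, activation="softmax"))(concat)

model = Model([enc_inputs, dec_inputs], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Varying the number of stacked LSTM layers and the HIDDEN size in this sketch mirrors how the nine model variants described in the abstract could be produced.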
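For the evaluation step, a common way to compute the reported ROUGE-1, ROUGE-2, and ROUGE-L F-scores is the `rouge-score` package; this is an assumption, as the paper does not name its tooling, and note that the package's default tokenizer targets Latin script, so a Devanagari-aware tokenizer would be needed for Nepali in practice.

```python
# Hedged example of scoring one (reference, generated) summary pair.
# `reference_summary` and `generated_summary` are hypothetical variables.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])
scores = scorer.score(reference_summary, generated_summary)
print(scores["rouge1"].fmeasure,
      scores["rouge2"].fmeasure,
      scores["rougeL"].fmeasure)
```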