Recurrent Neural Networks (RNNs) are a class of neural networks capable of processing sequential input, such as time series and natural language. Summaries help readers find relevant and useful information quickly, but producing them manually requires considerable effort, dedication, patience, and attention to detail. Automated abstractive text summarization addresses this by using deep learning to select the most significant information in a text and condense it while preserving its underlying meaning. The objective of this work is to construct an abstractive text summarizer that takes a long, meaningful text as input and produces its summary as output. The model is built on Long Short-Term Memory (LSTM) units, a type of RNN, arranged in a Sequence-to-Sequence (Seq2Seq) architecture; Seq2Seq learning trains a model to transform an input sequence into an output sequence. The dataset used is the News Summary dataset from Kaggle. Experimental results on this dataset show that the proposed method produces appropriate summaries, comparable to those of other methods.
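For illustration, below is a minimal sketch of the kind of LSTM-based Seq2Seq encoder-decoder the abstract describes. The vocabulary sizes, sequence lengths, embedding dimensions, and hidden size are illustrative assumptions, not values from the paper, and the paper's actual architecture may differ in depth and training details.

```python
# Minimal LSTM encoder-decoder (Seq2Seq) sketch for abstractive
# summarization. All hyperparameters below are assumed for illustration.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

SRC_VOCAB, TGT_VOCAB = 10000, 5000   # assumed vocabulary sizes
MAX_SRC_LEN, MAX_TGT_LEN = 300, 40   # assumed article/summary lengths
LATENT_DIM = 256                     # assumed LSTM hidden size

# Encoder: reads the article and compresses it into its final LSTM states.
enc_inputs = Input(shape=(MAX_SRC_LEN,))
enc_emb = Embedding(SRC_VOCAB, 128)(enc_inputs)
_, state_h, state_c = LSTM(LATENT_DIM, return_state=True)(enc_emb)

# Decoder: generates the summary token by token, initialized with the
# encoder's states so the output is conditioned on the input article.
dec_inputs = Input(shape=(MAX_TGT_LEN,))
dec_emb = Embedding(TGT_VOCAB, 128)(dec_inputs)
dec_outputs, _, _ = LSTM(
    LATENT_DIM, return_sequences=True, return_state=True
)(dec_emb, initial_state=[state_h, state_c])
outputs = Dense(TGT_VOCAB, activation="softmax")(dec_outputs)

model = Model([enc_inputs, dec_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

In this setup the model would be trained with teacher forcing (the decoder input is the target summary shifted by one token), and at inference time the decoder is run step by step, feeding each predicted token back in until an end-of-summary token is produced.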