Abstract

Automatic text summarization addresses the steadily increasing volume of text data available online by condensing long documents into short summaries. In this work, we review abstractive text summarization using a Recurrent Neural Network based Long Short-Term Memory (LSTM) architecture with an encoder and an attention decoder. In recent years, this type of summarization has been carried out with Recurrent Neural Networks, which learn from previous time steps. In this paper, we propose the Teacher Forcing technique to mitigate the slow convergence and poor performance of Recurrent Neural Networks. Our technique improves summarization performance by using an attention decoder that learns from the ground truth instead of from its own previous time steps, and it minimizes the error rate with a Stochastic Gradient Descent (SGD) optimizer. The approach yields more accurate results on the WikiHow dataset when measured with the ROUGE metric and performs well when compared with state-of-the-art results.

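A minimal sketch of the teacher-forcing training step described above, written in PyTorch (an assumption; the abstract does not name a framework). The decoder, layer sizes, and toy data are all illustrative, and the attention mechanism is omitted for brevity. The key point is that the ground-truth summary tokens, not the model's own predictions from previous time steps, are fed as decoder inputs, and the loss is minimized with the SGD optimizer mentioned above.

import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Hypothetical LSTM decoder used only to illustrate teacher forcing."""
    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, target_tokens, state):
        # Teacher forcing: the ground-truth sequence (shifted right) is fed
        # in one pass, so the model never consumes its own predictions
        # during training.
        emb = self.embed(target_tokens)
        hidden, state = self.lstm(emb, state)
        return self.out(hidden), state

# Toy training step; sizes and data are stand-ins, not the paper's settings.
vocab_size, batch, seq_len, hid_dim = 1000, 4, 12, 128
decoder = Decoder(vocab_size=vocab_size, hid_dim=hid_dim)
optimizer = torch.optim.SGD(decoder.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

reference = torch.randint(0, vocab_size, (batch, seq_len + 1))  # fake summary
inputs, targets = reference[:, :-1], reference[:, 1:]           # shift by one
state = (torch.zeros(1, batch, hid_dim), torch.zeros(1, batch, hid_dim))

optimizer.zero_grad()
logits, _ = decoder(inputs, state)
loss = criterion(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()   # backpropagate through the teacher-forced sequence
optimizer.step()  # SGD update, as in the abstract

At inference time no ground truth is available, so the decoder would instead feed each predicted token back as the next input; teacher forcing applies only during training.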