Abstract
The visual storytelling task aims to generate a relevant and coherent story for an ordered stream of images. Although visual storytelling methods have made promising improvements in recent years, existing methods pay little attention to the association ability and divergent thinking of the model, which are essential for human-like stories. This paper introduces a novel Associative Learning Network for Coherent Visual Storytelling to explore the model's association ability while telling a new story. Specifically, we first build a graph based on pointwise mutual information and learn the association degree of word pairs with a Graph Convolutional Network. In addition, an auxiliary hierarchical decoder is designed to combine the words to generate a coherent story. In this way, our model can recall information using associative memory, enhancing the coherence and informativeness of the generated story. Extensive experiments on the VIST dataset demonstrate that the proposed framework substantially outperforms state-of-the-art methods across multiple evaluation metrics.
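To make the two ingredients named in the abstract concrete, the following is a minimal sketch (not the authors' released code) of (1) building a word-pair graph whose edge weights are pointwise mutual information (PMI) scores estimated from co-occurrence counts, and (2) a single graph convolution layer that propagates those associations over word embeddings. All names, such as `pmi_adjacency` and `GCNLayer`, are illustrative assumptions rather than the paper's actual interfaces.

```python
import torch
import torch.nn as nn

def pmi_adjacency(co_occurrence: torch.Tensor) -> torch.Tensor:
    """Build an adjacency matrix from a symmetric word co-occurrence count matrix.

    PMI(i, j) = log( p(i, j) / (p(i) * p(j)) ); negative values are clipped to 0,
    a common simplification when constructing word graphs.
    """
    total = co_occurrence.sum()
    p_ij = co_occurrence / total                          # joint probability estimate
    p_i = co_occurrence.sum(dim=1, keepdim=True) / total  # marginal per word
    pmi = torch.log((p_ij + 1e-12) / (p_i * p_i.t() + 1e-12))
    adj = pmi.clamp(min=0.0)
    adj.fill_diagonal_(1.0)                               # self-loops
    return adj

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^{-1/2} A D^{-1/2} H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, word_embeddings: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.clamp(min=1e-12).pow(-0.5))
        norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt          # symmetric normalisation
        return torch.relu(norm_adj @ self.linear(word_embeddings))

# Usage: 5 vocabulary words with 16-dimensional embeddings.
counts = torch.randint(0, 20, (5, 5)).float()
counts = (counts + counts.t()) / 2                        # symmetrise the counts
adj = pmi_adjacency(counts)
layer = GCNLayer(16, 16)
associated = layer(torch.randn(5, 16), adj)               # association-aware word features
print(associated.shape)                                   # torch.Size([5, 16])
```

In such a setup, the GCN output would serve as association-enriched word representations that a hierarchical decoder could attend over when generating each story sentence; the decoder itself is not sketched here since the abstract does not specify its structure.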