Abstract

Many problems in storytelling generation stem from the difficulty of modeling the sequence of sentences. Language models are generally able to assign high scores to well-formed text, especially short texts, but they fail when they try to simulate human textual inference. Although automatically generated text in some cases sounds bland, incoherent, repetitive, and unrelated to the context, in other cases the generation process reveals a capacity to surprise the reader and avoid being boring or predictable, even while the generated text satisfies the requirements of the entailment task. The lyric tradition often does not proceed by strict logical inference but draws on alternatives such as unexpectedness, which is useful for predicting when a narrative story will be perceived as interesting. To achieve a better understanding of narrative variety, we propose a novel measure based on two components, inference and unexpectedness, whose relative weights shape the different experiences readers can have of a generated story. We also propose a supervised validation procedure that compares the original authorial text, on which the model is trained, with the generated one.

Highlights

  • We begin by discussing what we consider an existing disconnect between the study of Artificial Intelligence and the analysis of storytelling

  • The lyric tradition often does not proceed by strict logical inference but draws on alternatives such as unexpectedness, which is useful for predicting when a narrative story will be perceived as interesting

  • The purpose of our research is to extend entailment with other linguistic connectives, because the narrative and lyric traditions often do not proceed by strict logical inference



Introduction

Let us begin by discussing what we consider an existing disconnect between the study of Artificial Intelligence and the analysis of storytelling. Text generation methods try to discover the most probable distributions of word chains, with respect both to the global knowledge learned by the model (paradigmatic level) and to what is locally inferred from the previous sentence (syntagmatic level). The authors show this nicely by plotting the probability a model assigns to human text versus the text produced by beam search (Note 2). In addition to this kind of perplexity, we find case reports in which the fault is more pronounced, and inference tests show a lack of comprehension and common sense, even though the model used is GPT-3, currently the top model. It is evident that this spectrum collects a rich variety of sentence relations, some inferred by cause and effect, some not. This differs from the simple opposition between entailment and contradiction that current evaluation systems insist on measuring. We propose a novel measure of narrative inference, and more generally of the concatenation of a pair of sentences, based on two components: inference and unexpectedness, whose relative weights shape the different experiences readers can have of a story.
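As a minimal sketch of how such a measure could be formed (the linear weighting and the symbols \(\lambda\), \(\mathrm{Inf}\), and \(\mathrm{Unexp}\) are our illustrative assumptions, not notation fixed by the paper), the score of a sentence pair could combine the two components as

\[ S(s_i, s_{i+1}) = \lambda \,\mathrm{Inf}(s_i, s_{i+1}) + (1 - \lambda)\,\mathrm{Unexp}(s_i, s_{i+1}), \qquad \lambda \in [0, 1], \]

where \(\mathrm{Inf}\) could be an entailment probability taken from a natural language inference model and \(\mathrm{Unexp}\) a normalized surprisal (negative log-probability) of the second sentence given the first under a language model; a larger \(\lambda\) favors logically coherent continuations, a smaller one favors surprising ones. The following self-contained Python sketch illustrates only the weighting scheme; the two component functions are hypothetical placeholders, not the paper's implementation:

    # Illustrative sketch of a weighted narrative score.
    # Both component functions below are hypothetical stand-ins:
    # a real system would query an NLI model for `inference_score`
    # and a language model's surprisal for `unexpectedness_score`.

    def inference_score(premise: str, hypothesis: str) -> float:
        """Placeholder in [0, 1]: crude lexical-overlap proxy for entailment."""
        shared = set(premise.lower().split()) & set(hypothesis.lower().split())
        return min(1.0, len(shared) / max(1, len(hypothesis.split())))

    def unexpectedness_score(premise: str, hypothesis: str) -> float:
        """Placeholder in [0, 1]: here simply the complement of inference."""
        return 1.0 - inference_score(premise, hypothesis)

    def pair_score(premise: str, hypothesis: str, lam: float = 0.5) -> float:
        """Linear combination; larger lam rewards coherence, smaller lam surprise."""
        return (lam * inference_score(premise, hypothesis)
                + (1.0 - lam) * unexpectedness_score(premise, hypothesis))

    if __name__ == "__main__":
        a = "The knight rode into the dark forest."
        b = "The forest swallowed the last of the daylight."
        print(f"pair score (lam=0.7): {pair_score(a, b, lam=0.7):.3f}")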
