Abstract

In the field of natural language processing, story ending generation (SEG) requires not only a deep understanding of the narrative context but also the ability to formulate coherent conclusions. This study explores crosslingual transfer learning to address the scarcity of Arabic data for SEG, proposing the use of extensive English story corpora as a solution. We evaluate the efficacy of multilingual models, including mBART, mT5, and mT0, in generating Arabic story endings, assessing their performance in both zero-shot and few-shot settings. Despite the linguistic complexity of Arabic and the inherent challenges of crosslingual transfer, our findings demonstrate the potential of these multilingual models to transcend linguistic barriers. This research has significant implications for creative text generation and for improving multilingual natural language processing in low-resource language contexts.
