Abstract
In natural language processing, story ending generation (SEG) requires not only a deep understanding of the narrative context but also the ability to formulate coherent conclusions. This study explores cross-lingual transfer learning to address the scarcity of Arabic data for SEG, proposing that extensive English story corpora be leveraged to compensate. We evaluate the efficacy of multilingual models such as mBART, mT5, and mT0 in generating Arabic story endings, assessing their performance in both zero-shot and few-shot settings. Despite the linguistic complexities of Arabic and the inherent challenges of cross-lingual transfer, our findings demonstrate the potential of these multilingual models to transcend linguistic barriers, contributing to natural language processing across languages. This research has significant implications for creative text generation and for improving multilingual natural language processing in resource-limited language contexts.
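The few-shot setup described above can be illustrated with a minimal sketch: English (context, ending) exemplars are concatenated ahead of an Arabic story context, and the resulting prompt would be fed to a multilingual model such as mT5. The template, exemplar stories, and Arabic test story below are illustrative placeholders, not taken from the paper.

```python
# Hypothetical sketch of few-shot prompt assembly for cross-lingual SEG.
# The "Story:"/"Ending:" template is an assumption for illustration only;
# the paper's actual prompt format may differ.

def build_few_shot_prompt(exemplars, test_context, sep="\n\n"):
    """Join (story_context, ending) exemplars, then the unfinished test story.

    exemplars: list of (context, ending) pairs, e.g. drawn from a large
    English story corpus; test_context: the Arabic story to complete.
    Zero-shot corresponds to an empty exemplar list.
    """
    parts = []
    for context, ending in exemplars:
        parts.append(f"Story: {context}\nEnding: {ending}")
    # The test story ends with a bare "Ending:" cue for the model to fill.
    parts.append(f"Story: {test_context}\nEnding:")
    return sep.join(parts)

# Made-up English exemplar paired with a made-up Arabic test story.
exemplars = [
    ("Tom planted a seed and watered it every day.",
     "Months later, a small tree shaded his garden."),
]
prompt = build_few_shot_prompt(
    exemplars,
    "زرعت ليلى وردة في حديقتها.",  # "Layla planted a rose in her garden."
)
print(prompt)
```

The prompt string would then be tokenized and passed to the chosen multilingual model for generation; with an empty exemplar list, the same helper produces the zero-shot input.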