Abstract

In recent years, the field of Natural Language Processing (NLP) has undergone a revolution, with text generation playing a key role in this transformation. This shift is not limited to technological domains but has also reached creative ones, a prime example being the generation of song lyrics. Generative models such as the Generative Pre-trained Transformer 2 (GPT-2) require fine-tuning to perform well on such domain-specific tasks. Using the widely referenced Kaggle dataset "Song Lyrics", this paper examines the impact of varying three key hyperparameters: learning rate, batch size, and sequence length. The results identify the learning rate as the most influential factor, directly affecting the quality and coherence of the generated lyrics. Larger batch sizes and longer sequence lengths improve model performance, but only up to a saturation point beyond which further gains are limited. Through this exploration, the paper aims to clarify the process of model calibration and to emphasize the importance of strategic hyperparameter selection for high-quality lyric generation.
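To illustrate the kind of fine-tuning setup the study describes, the sketch below fine-tunes GPT-2 on a lyrics corpus with the Hugging Face transformers and datasets libraries, exposing the three hyperparameters under study. The file name song_lyrics.csv, the "lyrics" column, and the specific hyperparameter values are assumptions made for illustration, not the paper's exact configuration.

```python
# Hypothetical sketch of a GPT-2 fine-tuning setup; file name, column name,
# and hyperparameter values are illustrative assumptions, not the paper's setup.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Assumed: a CSV export of the Kaggle "Song Lyrics" dataset with a "lyrics" column.
raw = load_dataset("csv", data_files="song_lyrics.csv")["train"]

SEQ_LEN = 256                                  # sequence length under study

def tokenize(batch):
    return tokenizer(batch["lyrics"], truncation=True,
                     max_length=SEQ_LEN, padding="max_length")

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

args = TrainingArguments(
    output_dir="gpt2-lyrics",
    learning_rate=5e-5,                        # learning rate under study
    per_device_train_batch_size=8,             # batch size under study
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Varying learning_rate, per_device_train_batch_size, and SEQ_LEN in this configuration corresponds to the three-parameter exploration summarized in the abstract.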
