Lyric rewriting involves creatively rephrasing a song's original lyrics while preserving their core meaning and emotional essence. Sequence-to-sequence methods applied to this task often suffer from the lack of an annotated corpus and from the difficulty of understanding lyrics. Inspired by grammatical error correction (GEC), a language rewriting technique, and neural machine translation (NMT), a sequence-to-sequence generation technique, we propose novel self-supervised learning methods that effectively address the scarcity of lyric rewriting corpora. In addition, we propose a new pre-trained DAE Transformer model that fuses prior knowledge from the data to enhance lyric rewriting ability. Built on these two methods, our reference-as-context model (RaC-Large) achieves the best results against baselines including large language models, fully verifying the effectiveness of the proposed approach. We also validate our approach on GEC and NMT tasks, further demonstrating its potential across a broad range of sequence-to-sequence tasks.