Code generation has become an integral feature of modern IDEs and has attracted significant attention. Notable tools such as GitHub Copilot and TabNine have been proposed to tackle this task. However, these tools may shift code writing toward code reviewing, which requires modification from users. Although user feedback is valuable, the responses remain transient and do not persist across interaction sessions. This is due to the inherent characteristics of generative AI models, which require explicit re-training to integrate new data. Additionally, the non-deterministic and unpredictable nature of AI-powered models limits thorough examination of their unforeseen behaviors. We propose a methodology, named One-shot Correction, to mitigate these issues in natural-language-to-code translation models without additional re-training. We employ decomposition techniques to break code translation down into sub-problems. The final code is constructed from code snippets for each query chunk, either extracted from user feedback or selectively generated by a generative model. Our evaluation indicates performance comparable to or better than that of other models. Moreover, the methodology is straightforward and interpretable, which enables in-depth examination of unexpected results and facilitates insights for potential enhancements. We also demonstrate that user feedback can substantially improve code translation models without re-training. Finally, we develop a preliminary GUI application to demonstrate the utility of our methodology in simplifying the customization and assessment of suggested code for users.
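The decompose-then-assemble pipeline summarized above could be sketched as follows. This is a minimal illustration only: the names (`FeedbackStore`, `generate_snippet`, the `" and "`-based decomposition) are hypothetical stand-ins, not the paper's actual components.

```python
def decompose(query: str) -> list[str]:
    """Naive decomposition: split a natural-language query into chunks.
    (Assumed heuristic; the paper's decomposition technique is richer.)"""
    return [chunk.strip() for chunk in query.split(" and ")]


class FeedbackStore:
    """Persists user-corrected snippets keyed by query chunk, so that
    feedback survives across interaction sessions without re-training."""

    def __init__(self) -> None:
        self._corrections: dict[str, str] = {}

    def record(self, chunk: str, snippet: str) -> None:
        self._corrections[chunk] = snippet

    def lookup(self, chunk: str):
        return self._corrections.get(chunk)


def generate_snippet(chunk: str) -> str:
    """Stand-in for a call to a generative code model (assumption)."""
    return f"# generated code for: {chunk}"


def translate(query: str, store: FeedbackStore) -> str:
    """Assemble the final code from per-chunk snippets, preferring stored
    user feedback and falling back to the generative model otherwise."""
    snippets = []
    for chunk in decompose(query):
        snippet = store.lookup(chunk) or generate_snippet(chunk)
        snippets.append(snippet)
    return "\n".join(snippets)


store = FeedbackStore()
store.record("sort the list", "items.sort()")  # one-shot user correction
print(translate("sort the list and print it", store))
```

Here a single recorded correction is reused directly for the matching chunk, while the remaining chunk still goes through the generator, mirroring the claim that feedback improves output with no model re-training.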