Our work introduces a conditional reiteration mechanism for High-Fidelity Generative Adversarial Network (GAN) Inversion (HFGI) that preserves image-specific details (e.g., background and appearance) for both in-domain and out-of-domain images (e.g., faces with heavy makeup). The single-stage conditional latent maps of the HFGI encoder produce blurry regions in reconstructed images and lose detailed information during editing. To address this, we propose a reiterative conditional latent method that restores image-specific details sharply. The process runs in two stages: the first stage reconstructs the image, and the second stage refines image-specific details using conditional latent codes. Our model successfully inverts out-of-domain images while preserving their details, and it supports InterFaceGAN, GANSpace, and StyleCLIP for editing. We compare our approach with state-of-the-art GAN inversion methods on the FFHQ (Flickr-Faces-HQ) dataset, demonstrating significant improvements in inversion and editing quality.
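The two-stage control flow described above can be sketched with toy linear stand-ins; all names here are hypothetical, and the real model uses a StyleGAN generator with learned encoders rather than the matrices below. The sketch only illustrates the idea: a base latent code reconstructs the image, then the reconstruction residual is passed through a higher-capacity conditional branch and fed back into a second decoding pass.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # toy "image" dimensionality

# Hypothetical stand-ins for the real networks: plain linear maps are
# enough to illustrate the two-stage reiteration, not the actual method.
W_enc = rng.normal(size=(8, D))    # base encoder: image -> latent code
W_cond = rng.normal(size=(12, D))  # conditional branch: residual -> features

def generate(latent, condition=None):
    """Decode a latent code; an optional conditional signal adds back details."""
    out = np.linalg.pinv(W_enc) @ latent
    if condition is not None:
        out = out + condition
    return out

def invert(image):
    # Stage 1: reconstruct the image from the encoder's latent code alone.
    latent = W_enc @ image
    recon = generate(latent)
    # Stage 2 (reiteration): encode the reconstruction residual through the
    # higher-capacity conditional branch and decode again with that signal,
    # restoring image-specific details the latent code alone could not carry.
    residual = image - recon
    condition = np.linalg.pinv(W_cond) @ (W_cond @ residual)
    return generate(latent, condition=condition)

image = rng.normal(size=D)
err_one_stage = np.linalg.norm(image - generate(W_enc @ image))
err_two_stage = np.linalg.norm(image - invert(image))
print(err_two_stage < err_one_stage)
```

In this sketch the conditional branch has more capacity (12 dimensions) than the base latent (8 dimensions), so the second pass recovers part of the residual the first pass discarded, mirroring how the conditional latent codes sharpen image-specific details in the second stage.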