Abstract
Traditional guided image translation methods, built on encoder–decoder or U-Net structures, often struggle with complex or high-contrast images. To address this, we introduce a novel dual-stage strategy. First, we use a cascaded cross-gating MLP-Mixer to merge image and semantic guidance codes, generating intermediate results conditioned on these cues. Second, we apply a refined pixel-level loss function to handle noise in the semantic guidance, together with a new cross-attention gating mechanism for detail refinement. Additionally, our framework employs an MLP-Mixer-based discriminator, so the entire system is built on the MLP-Mixer architecture. Our results on cross-view image translation and person image synthesis benchmarks outperform current state-of-the-art methods, demonstrating the effectiveness of our approach.
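To make the core idea concrete, the sketch below illustrates one plausible reading of the cross-gating MLP-Mixer step: semantic guidance features produce a sigmoid gate that modulates image features, and the gated result passes through a standard Mixer block (token-mixing MLP followed by channel-mixing MLP). All function names, weight shapes, and the gating form here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mlp(x, w1, w2):
    # Two-layer MLP with a tanh-approximated GELU activation.
    h = x @ w1
    h = 0.5 * h * (1 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))
    return h @ w2

def mixer_block(x, tw1, tw2, cw1, cw2):
    # Token-mixing: apply the MLP across tokens (transpose, mix, transpose back).
    x = x + mlp(x.T, tw1, tw2).T
    # Channel-mixing: apply the MLP across channels.
    x = x + mlp(x, cw1, cw2)
    return x

def cross_gate(img_feat, sem_feat, gw):
    # Hypothetical cross-gating: semantic features yield a sigmoid gate
    # that element-wise modulates the image features.
    gate = 1 / (1 + np.exp(-(sem_feat @ gw)))
    return img_feat * gate

rng = np.random.default_rng(0)
T, C, H = 16, 32, 64  # tokens, channels, hidden width (illustrative sizes)
img = rng.standard_normal((T, C))   # image guidance code
sem = rng.standard_normal((T, C))   # semantic guidance code
tw1, tw2 = rng.standard_normal((T, H)) * 0.02, rng.standard_normal((H, T)) * 0.02
cw1, cw2 = rng.standard_normal((C, H)) * 0.02, rng.standard_normal((H, C)) * 0.02
gw = rng.standard_normal((C, C)) * 0.02

gated = cross_gate(img, sem, gw)          # semantic cues modulate image features
out = mixer_block(gated, tw1, tw2, cw1, cw2)
print(out.shape)  # (16, 32)
```

In the paper's cascaded variant, several such gated Mixer blocks would be stacked so the guidance influences the intermediate result at multiple depths; the single block above only shows the per-stage operation.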