Abstract

When an image translation task involves intra-domain translations, an untranslated source image will be judged as real by the discriminator. Consequently, if the network's nonlinearity is insufficient, the generator can fool the discriminator simply by producing output that resembles the source image. We propose an activation function, termed adaptive rectified linear unit (ReLU) with structure adaption (SA-AdaReLU), to enhance the control and nonlinearity of the network in image translation tasks. SA-AdaReLU combines two techniques: adaptive ReLU (AdaReLU) and a structural adaptive function. The proposed AdaReLU dynamically changes the channel-wise data distribution to better utilize features in the negative regions, which improves the control of the network when intra-domain translation is involved. Meanwhile, the structural adaptive function further strengthens the feature selection ability of adaptive instance normalization (AdaIN) and increases the network's spatial nonlinearity, allowing it to manipulate the spatial structure of the feature maps. Extensive experiments demonstrate the effectiveness of the proposed SA-AdaReLU. In addition, with SA-AdaReLU, fewer layers are required to build a generator that achieves the same visual quality, thereby reducing computational complexity.
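
To make the channel-wise adaptation concrete, the following is a minimal sketch of how an adaptive ReLU of this kind could be implemented, assuming the negative-branch slope of each channel is predicted from a conditioning code (for example, a style vector as used with AdaIN). The module name AdaReLU, the linear slope predictor, and the tanh bounding of the slopes are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn


class AdaReLU(nn.Module):
    """Illustrative channel-wise adaptive ReLU (assumed formulation).

    The slope applied to negative activations is predicted per channel
    from a conditioning vector, so negative-region features are retained
    with an adaptively chosen weight instead of being discarded.
    """

    def __init__(self, cond_dim: int, num_channels: int):
        super().__init__()
        # Maps the conditioning vector to one negative-branch slope per channel.
        self.to_slope = nn.Linear(cond_dim, num_channels)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map; cond: (N, cond_dim) conditioning code.
        slope = torch.tanh(self.to_slope(cond)).unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
        # Positive part passes unchanged; the negative part is scaled by the
        # per-channel slope, reshaping the channel-wise distribution.
        return torch.relu(x) + slope * torch.clamp(x, max=0.0)


if __name__ == "__main__":
    act = AdaReLU(cond_dim=64, num_channels=256)
    feats = torch.randn(2, 256, 32, 32)
    style = torch.randn(2, 64)
    out = act(feats, style)
    print(out.shape)  # torch.Size([2, 256, 32, 32])

In this sketch the activation reduces to a standard ReLU when the predicted slopes are zero, and to a leaky/parametric ReLU when they are fixed, which is one way to read the claim that AdaReLU adds control over how negative-region features are used.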
