Abstract
Neural networks learn task-oriented, high-level representations in an end-to-end manner by stacking multiple layers. Generative models have developed rapidly with the emergence of deep neural networks, but they still suffer from insufficient authenticity of the generated images, limited diversity and consistency, and a lack of interpretability in the generation process. Disentangled representation is an effective way to learn high-level feature representations and to make deep neural networks interpretable. We propose a general disentangled representation learning network that uses a variational autoencoder as the basic framework for image generation. A graph-based prior structure is embedded in the last module of the deep encoder network to partition the feature space into class, task-oriented, and task-unrelated information, and the priors are adaptively modified according to the task relevance of the generated image. Semi-supervised learning is further incorporated into the disentangled representation framework to reduce the labeling requirement and to extend most of the feature space under the task-unrelated feature assumption. Experimental results show that the proposed method is effective for various types of images and has good potential for further research and development.
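To make the latent-space partitioning concrete, the following is a minimal sketch (not the authors' implementation) of a VAE-style encoder whose latent vector is split into three sub-spaces corresponding to class, task-oriented, and task-unrelated factors; all layer sizes and names such as z_class, z_task, and z_free are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PartitionedVAEEncoder(nn.Module):
    """Hypothetical VAE encoder with a latent space split into three factors."""

    def __init__(self, in_dim=784, hidden=256, dims=(8, 16, 16)):
        super().__init__()
        self.dims = dims  # sizes of (class, task-oriented, task-unrelated) factors
        z_total = sum(dims)
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_total)      # means of the Gaussian posterior
        self.logvar = nn.Linear(hidden, z_total)  # log-variances of the posterior

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z from the approximate posterior.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Partition the sampled latent code into the three factor groups.
        z_class, z_task, z_free = torch.split(z, self.dims, dim=-1)
        return (z_class, z_task, z_free), mu, logvar


# Usage: encode a batch of flattened 28x28 images into the three factor groups.
enc = PartitionedVAEEncoder()
(z_class, z_task, z_free), mu, logvar = enc(torch.randn(4, 784))
```

In the paper's setting, separate priors (including the graph-based class prior) would be placed on each factor group rather than a single isotropic Gaussian over the whole latent vector; the split shown here only illustrates where that factorization would enter the encoder.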