Abstract

We propose a novel semi-supervised learning method for the Variational AutoEncoder (VAE) that yields a customized latent space through our EXplainable encoder Network (EXoN). This customization involves manually designing interpolation and structural constraints, such as proximity, which enhance the interpretability of the latent space. To improve classification performance, we introduce a new semi-supervised classification method called SCI (Soft-label Consistency Interpolation). Combining the classification loss with the Kullback–Leibler divergence is crucial for constructing an explainable latent space. Additionally, the variability of the generated samples is determined by an active latent subspace, which effectively captures distinctive characteristics. We conduct experiments on the MNIST, SVHN, and CIFAR-10 datasets, and the results demonstrate that our approach yields an explainable latent space while significantly reducing the effort required to analyze representation patterns within the latent space.
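To make the abstract's central point concrete, the sketch below shows one generic way a semi-supervised VAE objective can combine the usual reconstruction and Kullback–Leibler terms with a classification loss. This is a minimal illustration under assumed settings, not the authors' EXoN or SCI implementation; the network sizes, loss weights, and module names (e.g. `SemiSupervisedVAE`, `beta`, `gamma`) are hypothetical.

```python
# Minimal sketch of a semi-supervised VAE objective (illustrative only;
# not the EXoN/SCI method from the paper). All architectures and weights
# below are assumptions chosen for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, z_dim)
        self.fc_logvar = nn.Linear(256, z_dim)
        self.classifier = nn.Linear(256, n_classes)   # shares encoder features
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar, self.classifier(h)

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, beta=1.0, gamma=1.0):
    """ELBO terms on labelled and unlabelled data plus cross-entropy
    (classification loss) on the labelled subset."""
    def elbo(x):
        recon, mu, logvar, logits = model(x)
        recon_loss = F.mse_loss(recon, x, reduction="mean")
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon_loss + beta * kl, logits

    elbo_lab, logits_lab = elbo(x_lab)
    elbo_unlab, _ = elbo(x_unlab)
    ce = F.cross_entropy(logits_lab, y_lab)   # classification loss
    return elbo_lab + elbo_unlab + gamma * ce
```

In this kind of objective, the interplay between the cross-entropy term and the KL regularizer is what shapes the latent space, which is the role the abstract attributes to combining the classification loss with the Kullback–Leibler divergence.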
