Abstract

Impressive progress has recently been made in deep unsupervised clustering and feature disentanglement. In this paper, we propose a novel method built on top of a recent architecture, with a new interpretation of Gaussian mixture model (GMM) membership and an accompanying GMM loss that enhances clustering. The GMM loss is optimized with explicitly computed parameters under our coupled-GMM framework. Specifically, our model takes advantage of a GMM implicitly learned in latent space by neural networks (the GMM prior, the first GMM), while clustering explicitly via another GMM (the GMM estimator, the second GMM). We further introduce a Dirichlet conjugate loss as a regularization term that prevents the GMM estimator from degenerating to a few Gaussians. Finally, we propose an application of apparel generation based on the proposed method that requires only three selection steps. Extensive experiments on publicly available datasets demonstrate the effectiveness of our method in terms of both clustering and disentanglement performance.
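The Dirichlet conjugate loss mentioned above is not spelled out in this summary. Below is a minimal PyTorch sketch of one plausible form, assuming a symmetric Dirichlet prior over mixing weights estimated from a batch of membership probabilities; the function name, the concentration `alpha`, and the batch-averaged weight estimate are our illustrative assumptions, not the paper's definition.

```python
import torch

def dirichlet_conjugate_loss(resp, alpha=2.0, eps=1e-8):
    """Negative log-density (up to an additive constant) of a symmetric
    Dirichlet prior Dir(alpha, ..., alpha) over the mixing weights.

    resp: (N, K) membership probabilities from the inference network.
    """
    # Batch-level estimate of the mixing weights pi_k.
    pi = resp.mean(dim=0).clamp_min(eps)
    # -log Dir(pi | alpha) is proportional to -(alpha - 1) * sum_k log pi_k.
    return -((alpha - 1.0) * pi.log()).sum()
```

With `alpha > 1`, this term grows quickly as any mixing weight approaches zero, so minimizing it keeps all components in use and discourages the estimator from collapsing onto a few Gaussians.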

Highlights

  • Deep image generation is commonly achieved with encoder-decoder networks

  • On the Street View House Numbers (SVHN) dataset, each row generated by the coupled Gaussian mixture VAE (CGMVAE) contains only one digit class (Fig. 5(d)), whereas SPLIT-GMVAE sometimes mixes visually similar digits within the same cluster (e.g., digit ‘‘5’’ in the 2nd row, and digits ‘‘2’’ and ‘‘7’’ in the 5th row of Fig. 5(c))

  • We propose a generative model for deep unsupervised clustering and disentanglement by coupling a Gaussian mixture model (GMM) prior with a GMM estimator


Summary

INTRODUCTION

Deep image generation is commonly achieved with encoder-decoder networks. In a standard VAE, the KL regularization term encourages the posterior to be evenly distributed and yields equal-density clusters; intuitively, it also makes the posterior less informative, since a more informative posterior incurs a larger KL penalty. To avoid such an anti-clustering prior and to facilitate data clustering smoothly, we regard the label prediction as the membership probability of a GMM and adopt a Gaussian mixture loss, calculated from an explicit GMM expression, to guide clustering. This additional loss enables a straightforward clustering procedure in latent space. Extensive experiments and an ablation study show that our proposed method achieves decent disentangled representations and unsupervised clustering.
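To make this concrete, here is a minimal PyTorch sketch under a diagonal-covariance assumption: the label predictions act as GMM membership probabilities, the mixture parameters are computed explicitly from a batch (an EM-style M-step), and the negative mixture log-likelihood serves as the Gaussian mixture loss. The names and exact parameter estimates are ours for illustration, not the paper's formulation.

```python
import math
import torch

def gmm_loss(z, resp, eps=1e-8):
    """Negative log-likelihood of latent codes under a GMM whose
    parameters are computed explicitly from soft memberships.

    z:    (N, D) latent codes from the encoder
    resp: (N, K) membership probabilities (e.g., softmax label predictions)
    """
    n, d = z.shape
    nk = resp.sum(dim=0) + eps                  # (K,) effective cluster sizes
    pi = nk / n                                 # (K,) mixing weights
    mu = (resp.t() @ z) / nk.unsqueeze(1)       # (K, D) component means
    diff = z.unsqueeze(1) - mu.unsqueeze(0)     # (N, K, D) deviations
    # (K, D) membership-weighted diagonal covariances
    var = (resp.unsqueeze(2) * diff.pow(2)).sum(dim=0) / nk.unsqueeze(1) + eps
    # (N, K) log N(z_n | mu_k, diag(var_k))
    log_prob = -0.5 * ((diff.pow(2) / var.unsqueeze(0)).sum(dim=-1)
                       + var.log().sum(dim=-1).unsqueeze(0)
                       + d * math.log(2.0 * math.pi))
    # Negative mixture log-likelihood, averaged over the batch.
    return -torch.logsumexp(pi.log().unsqueeze(0) + log_prob, dim=1).mean()
```

In practice `resp` would come from a softmax over cluster logits, and one might stop gradients through the explicitly computed parameters so the loss primarily shapes the latent codes; both choices are design options, not details taken from the paper.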

RELATED WORK
DIRICHLET CONJUGATE LOSS
LOSS FUNCTION
The final loss to train our CGMVAE is defined as L_Mod; a hypothetical sketch of its composition appears after this list.
EXPERIMENT
CONCLUSION
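Since only the name L_Mod survives in this summary, the sketch below shows one hypothetical composition of the final training loss from the pieces described above, reusing the `gmm_loss` and `dirichlet_conjugate_loss` sketches; the weighting scheme and names are illustrative placeholders, not the paper's definition.

```python
def cgmvae_total_loss(recon_loss, kl_loss, z, resp, w_gmm=1.0, w_dir=0.1):
    """Hypothetical final loss: VAE ELBO terms plus the two auxiliary
    losses sketched above. Weights w_gmm and w_dir are placeholders."""
    return (recon_loss + kl_loss
            + w_gmm * gmm_loss(z, resp)
            + w_dir * dirichlet_conjugate_loss(resp))
```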