Abstract

In medical image datasets, discrete labels are often used to describe a continuous spectrum of conditions, making unsupervised image stratification a challenging task. In this work, we propose VAESim, an architecture for image stratification based on a conditional variational autoencoder. VAESim learns a set of prototypical vectors during training, each associated with a cluster in a continuous latent space. We perform a soft assignment of each data sample to the clusters and reconstruct the sample based on a similarity measure between the sample embedding and the prototypical vectors. To update the prototypical embeddings, we use an exponential moving average over the representations in the batch that are most similar to each prototype. We test our approach on the MNIST handwritten digit dataset and the PneumoniaMNIST medical benchmark dataset, where our method outperforms baselines in kNN accuracy (up to a 15% improvement) and performs on par with classification models trained in a fully supervised way. Our model also outperforms current end-to-end models for unsupervised stratification.
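The two mechanisms the abstract names — soft assignment of embeddings to prototypes by similarity, and an exponential-moving-average update of the prototypes from the most similar batch representations — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the choice of cosine similarity, the temperature, and the decay value are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def soft_assign(z, prototypes, temperature=1.0):
    """Soft cluster assignments for embeddings z (B, D) against
    K prototype vectors (K, D), via cosine similarity + softmax."""
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    p_n = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = (z_n @ p_n.T) / temperature          # (B, K) similarities
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)       # rows sum to 1

def ema_update(prototypes, z, weights, decay=0.99):
    """Move each prototype toward the batch embeddings that are most
    similar to it, using an exponential moving average."""
    nearest = weights.argmax(axis=1)              # closest prototype per sample
    updated = prototypes.copy()
    for k in range(prototypes.shape[0]):
        members = z[nearest == k]
        if len(members) > 0:
            updated[k] = decay * prototypes[k] + (1 - decay) * members.mean(axis=0)
    return updated
```

In a training loop, `soft_assign` would condition the decoder's reconstruction, while `ema_update` would run after each batch to refine the prototypes without gradient updates.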
