Abstract

The derivation of input–output relationships in deep learning architectures is largely a black-box process, in which uninformative or confounding factors may bias the classification results without the user being aware of it. The analysis of living cells, however, requires latent-space representations with an interpretable meaning, which can be investigated not only for classification but, above all, for understanding purposes. Exploiting the ability of variational autoencoders to derive regular latent-space representations in an unsupervised manner, we propose a novel Supervised-Source-Separation Variational Autoencoder (S3-VAE) algorithm that guides the encoding of images toward more effective class separability. This goal is achieved, under the assumption of a full-covariance Gaussian posterior, by introducing a term in the cost function that forces class-dependent sources of variation toward a one-hot encoding. The proposed approach was designed for an automatic platform for single-cell biological investigation based on time-lapse microscopy, for which a dedicated video-processing pipeline was developed, as an additional contribution, to reduce the effect of confounding factors on the S3-VAE representations. Results obtained in a classification scenario with artificially generated phantom cells and with experimental data from human prostate cells, in non-neoplastic, neoplastic, and metastatic neoplastic conditions, show that the S3-VAE latent representations allow the morphological differences among classes to be investigated and recognized. A comparative analysis against benchmark deep learning architectures demonstrates the effectiveness of the proposed approach and shows that S3-VAE achieves high performance in discriminating cell lines based on phenotypic variations.
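The supervision term described above can be illustrated with a minimal sketch. The details below are assumptions for illustration only (the paper's exact loss, weighting, and dimension assignment are not given in this abstract): we suppose the first C latent dimensions are designated as the class-dependent sources of variation, and a squared-error penalty pulls their posterior means toward the one-hot encoding of the sample's class label, to be added to the usual VAE reconstruction and KL terms.

```python
import numpy as np

def one_hot_supervision_loss(z_mu, labels, n_classes, weight=1.0):
    """Hypothetical penalty pushing the first `n_classes` latent
    dimensions toward a one-hot encoding of the class label.

    z_mu      : (N, D) array of posterior means, with D >= n_classes
    labels    : (N,) array of integer class labels in [0, n_classes)
    n_classes : number of classes C
    weight    : trade-off coefficient (an assumption; the paper's
                actual weighting scheme is not stated in the abstract)
    """
    targets = np.eye(n_classes)[labels]      # (N, C) one-hot targets
    class_dims = z_mu[:, :n_classes]         # class-dependent sources
    return weight * np.mean((class_dims - targets) ** 2)
```

In training, this term would be summed with the reconstruction and KL-divergence losses, so that gradient descent simultaneously regularizes the latent space and separates the class-dependent dimensions.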
