Abstract

The paper presents an initial study of the latent-space structure of neural networks trained for semantic segmentation. Segmentation was performed in a controlled setting with three classes of colored rectangular shapes. A classic autoencoder and a U-Net-like architecture were chosen as reference architectures. To study the structure of the latent space, a perceptron that linearly separates the classes was combined with the dimensionality-reduction algorithms UMAP and PCA. The result is a tool for evaluating the quality of a neural network based on the degree of class separability in its latent space.
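The separability-based evaluation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the latent codes are replaced with hypothetical synthetic clusters, the linear probe is a plain multi-class perceptron, and only the PCA projection (via SVD) is shown; the real study would extract latents from a trained autoencoder or U-Net and could additionally use UMAP.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_latents(n_per_class=100, dim=16):
    """Hypothetical stand-in for latent codes of three shape classes:
    three Gaussian clusters in a 16-dimensional 'latent space'."""
    centers = rng.normal(scale=5.0, size=(3, dim))
    X = np.vstack([centers[c] + rng.normal(size=(n_per_class, dim))
                   for c in range(3)])
    y = np.repeat(np.arange(3), n_per_class)
    return X, y

def linear_separability(X, y, epochs=50, lr=0.1):
    """Train a multi-class perceptron on the latent vectors; return its
    training accuracy as a crude class-separability score."""
    n, d = X.shape
    W = np.zeros((3, d))
    b = np.zeros(3)
    for _ in range(epochs):
        for i in range(n):
            pred = np.argmax(W @ X[i] + b)
            if pred != y[i]:                  # perceptron update on mistakes
                W[y[i]] += lr * X[i]; b[y[i]] += lr
                W[pred] -= lr * X[i]; b[pred] -= lr
    return np.mean(np.argmax(X @ W.T + b, axis=1) == y)

def pca_2d(X):
    """Project latents to 2-D with PCA (via SVD) for visualization."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

X, y = make_latents()
score = linear_separability(X, y)
proj = pca_2d(X)
print(f"separability score: {score:.2f}")
print("2-D projection shape:", proj.shape)
```

A network whose latent space cleanly separates the classes yields a score near 1.0; a poorly structured latent space yields a lower score, which is the quality signal the paper proposes.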
