Abstract

Purpose
This work presents a systematic comparison of popular shape and appearance models. Two statistical and four deep-learning-based shape and appearance models are compared and evaluated in terms of their expressiveness, described by their generalization ability and specificity, as well as further properties such as input data format, interpretability, and latent space distribution and dimension.

Methods
Classical statistical shape models and their locality-based extension are considered alongside autoencoders, variational autoencoders, diffeomorphic autoencoders, and generative adversarial networks. The approaches are evaluated in terms of generalization ability, specificity, and likeness depending on the amount of training data. Furthermore, various latent space metrics are presented in order to capture further major characteristics of the models.

Results
The experiments show that locality-based statistical shape models yield the best generalization ability for 2D and 3D shape modeling. However, the deep learning approaches show strongly improved specificity. In the case of simultaneous shape and appearance modeling, the neural networks are able to generate more realistic and diverse appearances. A major drawback of the deep-learning models, however, is their impaired interpretability and the ambiguity of their latent space.

Conclusions
For applications that do not require particularly good specificity, shape modeling can be reliably established with locality-based statistical shape models, especially when it comes to 3D shapes. For appearance modeling, however, deep learning approaches are more worthwhile.
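The generalization ability and specificity used as evaluation criteria above can be computed for any generative shape model that supports reconstruction and sampling. The sketch below illustrates both metrics for a simple linear (PCA) model as a stand-in; all function names are illustrative and the normalization conventions (e.g., dividing singular values by √n) are assumptions, not the paper's exact protocol.

```python
import numpy as np

def fit_pca(shapes, n_components):
    """Fit a linear (PCA) shape model to row-vectorized training shapes."""
    mean = shapes.mean(axis=0)
    # SVD of the centered data matrix yields the principal modes of variation
    _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    stds = s[:n_components] / np.sqrt(len(shapes))  # per-mode standard deviations
    return mean, vt[:n_components], stds

def reconstruct(shape, mean, modes):
    """Project a shape onto the model subspace and back (best linear fit)."""
    b = modes @ (shape - mean)
    return mean + modes.T @ b

def generalization(model, held_out):
    """Mean reconstruction error on unseen shapes (lower = better generalization)."""
    mean, modes, _ = model
    errs = [np.linalg.norm(s - reconstruct(s, mean, modes)) for s in held_out]
    return float(np.mean(errs))

def specificity(model, train, n_samples=100, rng=None):
    """Mean distance of random model samples to the nearest training shape
    (lower = samples look more like real data)."""
    if rng is None:
        rng = np.random.default_rng(0)
    mean, modes, stds = model
    dists = []
    for _ in range(n_samples):
        b = rng.normal(scale=stds)       # sample mode coefficients ~ N(0, lambda_i)
        sample = mean + modes.T @ b
        dists.append(min(np.linalg.norm(sample - t) for t in train))
    return float(np.mean(dists))
```

A model that reconstructs held-out shapes well but produces implausible samples scores well on generalization and poorly on specificity; the abstract reports exactly this trade-off between locality SSMs and the deep generative models.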

Highlights

  • Building representative, generative models that capture shape and appearance variations of anatomical structures commonly observed in a population of subjects is a classical problem in computational anatomy

  • The resulting "shape-normalized" images can be used in a similar manner for a principal component analysis (PCA)-based modeling of the intensities sampled on multiple points

  • The statistical shape models (SSMs) and their locality-based extension both yield slightly improved generalization ability compared to the deep-learning methods

Introduction

Building generative models that capture shape and appearance variations of anatomical structures commonly observed in a population of subjects is a classical problem in computational anatomy. Statistical shape models (SSMs) use principal component analysis (PCA) on point-wise shape representations to compactly describe the shape variability [4,12]. While those models have had great success in the past [12], they come with significant disadvantages: they can only represent linear manifolds, rely on point-by-point correspondences across all training shapes, and do not generalize well to unseen data when only few training samples are available. Some of those shortcomings have been addressed via targeted extensions of the core method. In addition to shape modeling, the PCA-based mechanisms underlying SSMs [4] have been used for appearance modeling as well, e.g., by applying PCA to intensities sampled on shape-normalized images.
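The PCA-on-landmarks construction described above can be sketched in a few lines. This is a minimal illustration of the classical SSM idea, assuming landmark correspondence across training shapes has already been established; the function names and array layout are illustrative, not from the paper.

```python
import numpy as np

def build_ssm(landmarks):
    """Classical SSM: PCA on flattened shape vectors.

    landmarks: array of shape (n_shapes, n_points, dim) with corresponding
    points across all shapes.
    """
    x = landmarks.reshape(len(landmarks), -1)   # flatten each shape to a row vector
    mean = x.mean(axis=0)
    _, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    eigvals = s ** 2 / (len(x) - 1)             # per-mode variances (lambda_i)
    return mean, vt, eigvals

def sample_shape(mean, modes, eigvals, coeffs):
    """New shape = mean + sum_i b_i * mode_i, with b_i scaled by sqrt(lambda_i).

    Coefficients are conventionally restricted to roughly +/-3 standard
    deviations per mode to stay within plausible shape variation.
    """
    b = np.asarray(coeffs) * np.sqrt(eigvals[: len(coeffs)])
    return mean + modes[: len(coeffs)].T @ b
```

Because the model is a linear subspace around the mean shape, it is directly interpretable (each mode is a visualizable deformation) but, as noted above, it cannot capture nonlinear shape manifolds.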
