Abstract

Motivation: Cell shape provides both geometry for, and a reflection of, cell function. Numerous methods for describing and modeling cell shape have been described, but previous evaluation of these methods in terms of the accuracy of generative models has been limited.

Results: Here we compare traditional methods and deep autoencoders for building generative models of cell shape, in terms of the accuracy with which shapes can be reconstructed from the models. We evaluated the methods on different collections of 2D and 3D cell images and found that none of the methods gave accurate reconstructions using low-dimensional encodings. As expected, much higher accuracies were observed using high-dimensional encodings, with outline-based methods significantly outperforming image-based autoencoders. The latter tended to encode all cells as having smooth shapes, even at high dimensions. For complex 3D cell shapes, we developed a substantially improved method based on the spherical harmonic transform that performs significantly better than the other methods. We obtained similar results for the joint modeling of cell and nuclear shape. Finally, we evaluated the modeling of shape dynamics by interpolation in the shape space and found that our modified method produced lower deformation energies along linear interpolation paths than the other methods, enabling practical shape evolution in high-dimensional shape spaces. We conclude that our improved spherical harmonic based methods are preferable for cell and nuclear shape modeling, providing better representations, higher computational efficiency and requiring fewer training images than deep learning methods.

Availability and implementation: All software and data are available at http://murphylab.cbd.cmu.edu/software.

Supplementary information: Supplementary data are available at Bioinformatics online.
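The evaluation described above rests on two basic operations: measuring how well a shape can be reconstructed from its encoding, and interpolating between encodings to model shape dynamics. The following is a minimal NumPy sketch of both; the function names are hypothetical and not part of the paper's released software, and the pixel-disagreement metric is a simple placeholder rather than the accuracy measure actually used.

```python
import numpy as np

def reconstruction_error(original_mask, reconstructed_mask):
    """Fraction of pixels/voxels on which a binary shape mask and its
    reconstruction disagree (a simple placeholder metric; the accuracy
    measures used in the paper may differ)."""
    a = np.asarray(original_mask, dtype=bool)
    b = np.asarray(reconstructed_mask, dtype=bool)
    return float(np.logical_xor(a, b).mean())

def interpolate_shape_codes(z_start, z_end, n_steps=10):
    """Linear interpolation between two shape encodings, the operation used
    to model shape dynamics by walking through the shape space."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - a) * z_start + a * z_end for a in alphas]
```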

Highlights

  • Gaussian random noise with standard deviation 0.1 is added to the coordinates

  • The proportion of points used as control points is sampled from the distribution U(0.01, 0.20)

  • If there is no branching, the thickness T0 at the end point connected to the cell body is sampled from U(2, 3), and the thickness T1 at the other end point is taken as the maximum of U(0.4·T0, T0) and T0 − 0.02·L + U(0, 0.05), where L is the length of the neurite (see the code sketch after this list)
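The highlights above summarize sampling steps used in the synthetic-cell simulation. Below is a minimal NumPy sketch of these steps; the function names are hypothetical, and details not given in the text (such as the minimum number of control points kept) are assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def perturb_coordinates(coords):
    """Add Gaussian noise with standard deviation 0.1 to the sampled
    coordinates (coords is an (N, 2) array)."""
    return coords + rng.normal(0.0, 0.1, size=coords.shape)

def sample_control_points(coords):
    """Keep a random proportion of the points, drawn from U(0.01, 0.20),
    as control points (at least 2 points kept, by assumption)."""
    proportion = rng.uniform(0.01, 0.20)
    n_keep = max(2, int(round(proportion * len(coords))))
    idx = np.sort(rng.choice(len(coords), size=n_keep, replace=False))
    return coords[idx]

def sample_neurite_thickness(L):
    """Sample end-point thicknesses for an unbranched neurite of length L:
    T0 ~ U(2, 3) at the cell-body end, and
    T1 = max(U(0.4*T0, T0), T0 - 0.02*L + U(0, 0.05)) at the far end."""
    t0 = rng.uniform(2.0, 3.0)
    t1 = max(rng.uniform(0.4 * t0, t0),
             t0 - 0.02 * L + rng.uniform(0.0, 0.05))
    return t0, t1
```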


Summary

Detailed simulation process of SNL cells

If there is no branching, the thickness T0 at the end point connected to the cell body is sampled from U(2, 3), and the thickness T1 at the other end point is taken as the maximum of U(0.4·T0, T0) and T0 − 0.02·L + U(0, 0.05), where L is the length of the neurite. We first place two neurites at angles 0 and π relative to the cell body, and allow small perturbations of these angles drawn from the truncated normal distribution N[−0.22, 0.22](0, 0.125²) multiplied by 2π. For the upper slices, starting from the central slice, image erosion is applied to the current slice using a disk kernel of random size. The lower slices are sampled similarly: starting from the central slice, image erosion or dilation is applied to the current slice using a disk kernel of random size.
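A minimal sketch of the angle perturbation and the slice-wise erosion/dilation, assuming scikit-image morphology operations; the function names, the maximum kernel radius, and the 50/50 choice between erosion and dilation for the lower slices are assumptions not specified in the text.

```python
import numpy as np
from skimage.morphology import disk, binary_erosion, binary_dilation

rng = np.random.default_rng()

def sample_neurite_angles(sigma=0.125, bound=0.22):
    """Place two neurites at angles 0 and pi around the cell body, each
    perturbed by a draw from the truncated normal N[-bound, bound](0, sigma^2)
    scaled by 2*pi (truncation via simple rejection sampling)."""
    def truncated_normal():
        while True:
            x = rng.normal(0.0, sigma)
            if -bound <= x <= bound:
                return x
    return [0.0 + truncated_normal() * 2 * np.pi,
            np.pi + truncated_normal() * 2 * np.pi]

def build_slices(central_slice, n_slices, max_radius=3, allow_dilation=False):
    """Starting from the central slice, repeatedly erode (or, for the lower
    half when allow_dilation=True, randomly erode or dilate) the current
    slice with a disk kernel of random radius."""
    slices = [np.asarray(central_slice, dtype=bool)]
    for _ in range(n_slices):
        radius = int(rng.integers(1, max_radius + 1))
        op = binary_erosion
        if allow_dilation and rng.random() < 0.5:
            op = binary_dilation
        slices.append(op(slices[-1], disk(radius)))
    return np.stack(slices)
```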

  • Definition of residual blocks
  • Network structure for the variational autoencoders
  • Network structure for the outline autoencoder
  • Parameter settings used in training
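The supplementary subsections listed above specify the network components. As a purely illustrative, hypothetical sketch of what a residual block in such an autoencoder commonly looks like (PyTorch; not the paper's actual definition):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A generic 2D residual block: two convolutions with a skip connection.
    This is the standard pattern, not necessarily the exact definition used
    in the paper's supplementary material."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Add the block's input back to its output before the final activation.
        return self.act(x + self.body(x))
```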