Directional and Topological Transformer with Topology Priors for 4D Cellular Image Segmentation
Cellular segmentation is a crucial step in building cell-shape maps and morphological graphs of living embryos from time-lapse 3D laser-confocal fluorescence images. One reliable way to segment cell shapes with deep networks is to incorporate voxel-distance and topology priors that model shapes as topological structures. However, automated CNN-based segmentation methods often suffer from low signal-to-noise ratios and insufficient training data, and previous work on semantic segmentation has ignored directional distance and topological information. In this paper, we propose a 3D directional and topological transformer named DTTR (Directional distance mapping and Topological learning TRansformer), which uses topology priors for binarization and learns an effective directional latent space. We compute attention over directional distance maps and employ a topological loss, topology priors, and an optimized Delaunay-clustering algorithm to evaluate voxel predictions in a higher-dimensional topological space. DTTR outperforms existing deep learning models and provides a reliable segmented cell-instance dataset (22 new living C. elegans embryos) for establishing a 4D cellular morphology map.
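The abstract does not specify how the directional distance maps are defined. As a purely illustrative sketch, one common formulation assigns each foreground voxel its distance to the nearest background voxel along a set of fixed directions. The snippet below shows this idea in 2D along the four axis-aligned directions; the function name `directional_distances` and the 2D simplification are our assumptions, not the paper's implementation, which operates on 3D volumes.

```python
import numpy as np

def directional_distances(mask):
    """For each foreground pixel of a 2D boolean mask, compute the length of
    the unbroken foreground run reaching it along four axis-aligned
    directions (left-to-right, right-to-left, top-to-bottom, bottom-to-top).
    Background pixels get 0. Returns an array of shape (4, H, W).

    Illustrative only: the paper's directional distance maps are 3D and may
    use a different direction set or distance definition."""
    h, w = mask.shape
    out = np.zeros((4, h, w), dtype=np.int32)
    for y in range(h):
        run = 0                       # left-to-right scan
        for x in range(w):
            run = run + 1 if mask[y, x] else 0
            out[0, y, x] = run
        run = 0                       # right-to-left scan
        for x in range(w - 1, -1, -1):
            run = run + 1 if mask[y, x] else 0
            out[1, y, x] = run
    for x in range(w):
        run = 0                       # top-to-bottom scan
        for y in range(h):
            run = run + 1 if mask[y, x] else 0
            out[2, y, x] = run
        run = 0                       # bottom-to-top scan
        for y in range(h - 1, -1, -1):
            run = run + 1 if mask[y, x] else 0
            out[3, y, x] = run
    return out
```

Stacking such per-direction maps gives a multi-channel representation on which attention can then be computed, which is one plausible reading of "attention over directional distance maps".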