We propose the Topology-Preserving Segmentation Network (TPSN), a deformation-based model that extracts objects from an image while preserving their topological properties. The network generates segmentation masks with the same topology as a template mask, even when trained on limited data. It consists of two components: a Deformation Estimation Network, which produces a deformation map that warps the template mask to enclose the region of interest, and a Beltrami Adjustment Module, which enforces the bijectivity of the deformation map by truncating the associated Beltrami coefficient, based on quasiconformal theory. The network can also be trained in an unsupervised manner by incorporating an unsupervised segmentation loss, eliminating the need for labeled training data. Experimental results on various image datasets show that TPSN achieves higher segmentation accuracy than state-of-the-art models while guaranteeing correct topology. Furthermore, we demonstrate TPSN's ability to handle multi-object segmentation.
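To make the Beltrami-truncation idea concrete, the following is a minimal sketch (not the paper's implementation) of how one might compute and clamp the Beltrami coefficient of a grid-sampled planar deformation map. The function names `beltrami_coefficient` and `truncate_beltrami`, the finite-difference discretization, and the bound of 0.99 are illustrative assumptions; the key fact used is that a map whose Beltrami coefficient has supremum norm strictly below 1 is quasiconformal and hence bijective.

```python
import numpy as np

def beltrami_coefficient(fx, fy):
    """Beltrami coefficient mu = f_zbar / f_z of a planar map f = fx + i*fy,
    approximated with finite differences on a unit-spaced grid (illustrative)."""
    f = fx + 1j * fy
    df_dy, df_dx = np.gradient(f)            # axis 0 is y, axis 1 is x for arrays indexed [y, x]
    f_z    = 0.5 * (df_dx - 1j * df_dy)      # Wirtinger derivative df/dz
    f_zbar = 0.5 * (df_dx + 1j * df_dy)      # Wirtinger derivative df/dzbar
    return f_zbar / (f_z + 1e-8)             # small epsilon avoids division by zero

def truncate_beltrami(mu, bound=0.99):
    """Clamp |mu| below `bound` < 1; keeping the sup-norm of mu strictly below 1
    is the quasiconformal condition that keeps the associated map bijective."""
    mag = np.abs(mu)
    scale = np.where(mag >= bound, bound / (mag + 1e-8), 1.0)
    return mu * scale
```

In the actual Beltrami Adjustment Module, the truncated coefficient would then be used to recover a bijective deformation map (for example via a Beltrami-coefficient-based reconstruction from the quasiconformal literature); that reconstruction step is beyond this sketch.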