Automatic detection and segmentation of biological objects in 2D and 3D image data is central to answering countless biomedical research questions. While many existing computational methods help reduce manual labeling time, there remains a strong demand for further quality improvements in automated solutions. In the natural image domain, spatial embedding-based instance segmentation methods are known to yield high-quality results, but their utility for biomedical data is largely unexplored. Here we introduce EmbedSeg, an embedding-based instance segmentation method designed to segment instances of desired objects visible in 2D or 3D biomedical image data. We apply our method to four 2D and seven 3D benchmark datasets, showing that it matches or outperforms existing state-of-the-art methods. While the 2D datasets and three of the 3D datasets are well known, we have created the required training data for four new 3D datasets, which we make publicly available online. Beyond performance, usability is also important for a method to be useful. Hence, EmbedSeg is fully open source (https://github.com/juglab/EmbedSeg), offering (i) tutorial notebooks to train EmbedSeg models and use them to segment object instances in new data, and (ii) a napari plugin that can also be used for training and segmentation without requiring any programming experience. We believe this renders EmbedSeg accessible to virtually everyone who requires high-quality instance segmentations of 2D or 3D biomedical image data.
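To illustrate the general idea behind spatial embedding-based instance segmentation (not EmbedSeg's specific training or clustering procedure), the following minimal sketch assumes a network has already predicted, for every foreground pixel, an offset vector pointing toward its object's center; pixels whose shifted coordinates land close together are then grouped into one instance. The function name `cluster_embeddings` and all parameters are illustrative assumptions, not part of the EmbedSeg API.

```python
import numpy as np

def cluster_embeddings(offsets, fg_mask, radius=2.0):
    """Group foreground pixels into instances from predicted 2D offsets.

    offsets : (2, H, W) array, per-pixel offset toward the object center.
    fg_mask : (H, W) boolean array, True for foreground pixels.
    radius  : pixels whose shifted coordinates fall within this distance of a
              seed's embedding are assigned to the same instance.
    """
    h, w = fg_mask.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Shift every pixel's coordinate by its predicted offset ("spatial embedding").
    emb = np.stack([yy + offsets[0], xx + offsets[1]], axis=0)

    labels = np.zeros((h, w), dtype=int)
    unassigned = fg_mask.copy()
    next_id = 1
    while unassigned.any():
        # Pick an arbitrary unassigned pixel as a seed and treat its
        # embedding as a tentative instance center.
        sy, sx = np.argwhere(unassigned)[0]
        center = emb[:, sy, sx]
        dist = np.linalg.norm(emb - center[:, None, None], axis=0)
        member = unassigned & (dist < radius)
        labels[member] = next_id
        unassigned &= ~member
        next_id += 1
    return labels
```

The same principle extends to 3D by predicting three offset components per voxel; the appeal of the embedding formulation is that pixels belonging to the same object are pulled to a common point, so touching instances remain separable even when their boundaries are ambiguous.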