Abstract
Intracellular organelle networks (IONs) such as the endoplasmic reticulum (ER) network and the mitochondrial (MITO) network serve crucial physiological functions, and the morphology of these networks plays a key role in mediating those functions. Accurate image segmentation is required for analyzing the morphology and topology of these networks in applications such as molecular mechanism analysis and drug target screening. So far, however, progress in segmenting these networks has been hindered by their structural complexity and density. In this study, we first establish a rigorous performance baseline for accurate segmentation of these organelle networks from fluorescence microscopy images by optimizing a baseline U-Net model. We then develop the multi-resolution encoder (MRE) and the hierarchical fusion loss (L_hf) based on two inductive components, namely low-level features and topological self-similarity, to help the model better adapt to the task of segmenting IONs. Empowered by MRE and L_hf, both U-Net and Pyramid Vision Transformer (PVT) outperform competing state-of-the-art models such as U-Net++, HR-Net, nnU-Net, and TransUNet on custom datasets of the ER network and the MITO network, as well as on public datasets of another biological network, the retinal blood vessel network. In addition, integrating MRE and L_hf with models such as HR-Net and TransUNet also enhances their segmentation performance. These experimental results confirm the generalization capability and potential of our approach. Furthermore, accurate segmentation of the ER network enables analyses that provide novel insights into its dynamic morphological and topological properties. Code and data are openly accessible at https://github.com/cbmi-group/MRE.
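The abstract does not spell out the formulation of the hierarchical fusion loss; the sketch below is only one plausible reading, assuming L_hf supervises side outputs of the decoder at several resolutions (a deep-supervision-style design) and fuses the per-scale losses with fixed weights. The function name hierarchical_fusion_loss, the choice of binary cross-entropy as the per-scale term, and the example weights are illustrative assumptions, not the authors' method; consult the linked repository for the actual implementation.

    # Hypothetical multi-scale fusion loss in PyTorch; see hedging note above.
    import torch
    import torch.nn.functional as F

    def hierarchical_fusion_loss(side_outputs, target, weights=None):
        """Fuse per-scale BCE losses from multi-resolution decoder outputs.

        side_outputs: list of logit tensors, each (N, 1, H_i, W_i), at
                      decreasing resolution.
        target:       ground-truth mask (N, 1, H, W) with values in {0, 1}.
        weights:      optional per-scale weights (illustrative default: equal).
        """
        if weights is None:
            weights = [1.0] * len(side_outputs)
        total = 0.0
        for logits, w in zip(side_outputs, weights):
            # Downsample the mask to this side output's resolution before
            # computing the per-scale loss.
            tgt = F.interpolate(target, size=logits.shape[-2:], mode="nearest")
            total = total + w * F.binary_cross_entropy_with_logits(logits, tgt)
        return total / sum(weights)

    # Usage: decoder outputs at full, 1/2, and 1/4 resolution.
    outs = [torch.randn(2, 1, 256, 256), torch.randn(2, 1, 128, 128),
            torch.randn(2, 1, 64, 64)]
    mask = (torch.rand(2, 1, 256, 256) > 0.5).float()
    loss = hierarchical_fusion_loss(outs, mask)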