Abstract

Two-photon Ca2+ imaging plays an increasingly essential role in neuroscience research. However, the need for extensive expert annotation poses a significant obstacle to improving the performance of neuron segmentation models. Here, we present NeuroSeg-III, a self-supervised learning approach designed for fast and precise segmentation of neurons in two-photon Ca2+ imaging data. The approach consists of two modules: a self-supervised pre-training network and a segmentation network. After pre-training the encoder of the segmentation network with a self-supervised method that requires no annotated data, we need only fine-tune the segmentation network on a small amount of annotated data. The segmentation network builds on YOLOv8s and incorporates FasterNet, an efficient multi-scale attention mechanism (EMA), and a bi-directional feature pyramid network (BiFPN), which together enhance segmentation accuracy while reducing computational cost and parameter count. The generalization of the approach was validated across different Ca2+ indicators and imaging scales. Notably, the proposed approach achieves exceptional speed and accuracy, surpassing current state-of-the-art benchmarks on a publicly available dataset. These results underscore the effectiveness of NeuroSeg-III, which pairs an efficient training strategy tailored to two-photon Ca2+ imaging data with remarkable precision in neuron segmentation.
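
The two-stage training strategy described above (self-supervised pre-training of the encoder on unlabeled frames, followed by fine-tuning the full segmentation network on a small annotated set) can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration: the toy Encoder, the SimSiam-style objective, the augmentation, and the mask head are stand-ins chosen for brevity, not the paper's actual FasterNet/YOLOv8s components or its exact self-supervised objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoder standing in for the segmentation backbone
# (the paper's backbone is FasterNet-based; this toy CNN is a placeholder).
class Encoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):
        return self.proj(self.features(x))

def simsiam_loss(p, z):
    # Negative cosine similarity with stop-gradient on the target branch,
    # as in SimSiam; one of several SSL objectives that could fill this
    # role, and not necessarily the one used by NeuroSeg-III.
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def augment(x):
    # Toy augmentation: random flip plus noise; real pipelines use stronger views.
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-1])
    return x + 0.05 * torch.randn_like(x)

# --- Stage 1: self-supervised pre-training on unlabeled frames ---
encoder = Encoder()
predictor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))
opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)
unlabeled = torch.randn(16, 1, 64, 64)  # stand-in for unlabeled Ca2+ frames
for step in range(5):
    v1, v2 = augment(unlabeled), augment(unlabeled)
    z1, z2 = encoder(v1), encoder(v2)
    loss = 0.5 * (simsiam_loss(predictor(z1), z2)
                  + simsiam_loss(predictor(z2), z1))
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: transfer the pre-trained encoder, fine-tune with few labels ---
class SegNet(nn.Module):
    def __init__(self, encoder):
        super().__init__()
        self.backbone = encoder.features      # reuse pre-trained weights
        self.head = nn.Linear(64, 64 * 64)    # toy per-pixel mask head

    def forward(self, x):
        return self.head(self.backbone(x)).view(-1, 1, 64, 64)

seg = SegNet(encoder)
labeled_x = torch.randn(4, 1, 64, 64)         # small annotated set
labeled_y = (torch.rand(4, 1, 64, 64) > 0.9).float()
ft_opt = torch.optim.AdamW(seg.parameters(), lr=1e-4)
for step in range(5):
    ft_loss = F.binary_cross_entropy_with_logits(seg(labeled_x), labeled_y)
    ft_opt.zero_grad(); ft_loss.backward(); ft_opt.step()
```

The key point of the design is that Stage 2 reuses the Stage 1 weights (seg.backbone is the pre-trained encoder.features), which is what allows a small annotated set to suffice for fine-tuning.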
