Articles published on Automatic segmentation
10519 Search results
- New
- Research Article
- 10.1007/s10278-026-01846-x
- Feb 12, 2026
- Journal of Imaging Informatics in Medicine
- Alberto Guzzi + 4 more
While several openly available tools for the automatic segmentation of the anatomical cross-sectional area (ACSA) of muscle exist, there is no open-source, peer-reviewed tool for the patellar tendon. In this study, we tested an automatic approach for segmenting the patellar tendon ACSA in ultrasound images. Images were acquired at 25%, 50%, and 75% of the patellar tendon length from 30 participants (age 46.87 ± 6.03 years; BMI 25.45 ± 4.14 kg/m2). To assess measurement consistency, we evaluated intra-rater and inter-session reliability using manual segmentation of the ACSA. Additionally, we trained three neural networks on a dataset of 497 images to compare manual with automatic segmentation. Intra-rater reliability was good, with an intraclass correlation coefficient (ICC) of 0.804 (95% CI 0.628-0.902), a standard error of measurement (SEM) of 0.05 cm2 (0.03-0.07), and a mean absolute error (MAE) of 0.05 cm2 (0.04-0.07), while inter-session reliability was excellent, with an ICC of 0.980 (0.970-0.987), an SEM of 0.02 cm2 (0.02-0.02), and an MAE of 0.02 cm2 (0.01-0.02). Regarding comparability with manual analysis after removal of erroneous predictions, the ICC was 0.848 (0.702-0.914), the SEM was 0.05 cm2 (0.04-0.07), and the MAE was 0.05 cm2 (0.05-0.06), with a small standardized mean difference of 0.53 (0.33-0.75). When applying the model, analysis times per image ranged between 0.302 and 0.414 s. The proposed approach enables fast and less operator-dependent patellar tendon ACSA analysis. Although some differences were observed between manual and automatic analysis, this tool, if applied cautiously, could provide valuable support in clinical and research settings.
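The reliability metrics in the abstract above (ICC, SEM, MAE) are standard statistics that are easy to reproduce. As a rough generic sketch (not the authors' code), ICC(2,1) and the derived SEM can be computed from an n-subjects × k-sessions matrix; the two-identical-sessions example at the bottom is hypothetical:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    ratings: array of shape (n_subjects, k_raters_or_sessions).
    """
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    row_means = r.mean(axis=1)          # per-subject means
    col_means = r.mean(axis=0)          # per-rater/session means
    ss_total = ((r - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def sem_from_icc(values, icc):
    """SEM = SD * sqrt(1 - ICC); the clamp guards against rounding below zero."""
    return np.std(values, ddof=1) * np.sqrt(max(0.0, 1.0 - icc))

# Hypothetical ACSA values (cm^2); two identical sessions -> perfect agreement.
acsa = np.array([1.21, 0.93, 1.48, 1.10, 0.85])
sessions = np.column_stack([acsa, acsa])
```

With identical sessions the ICC is 1 and the SEM collapses to 0, which is a convenient sanity check for the implementation.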
- New
- Research Article
- 10.3174/ajnr.a9229
- Feb 12, 2026
- AJNR: American Journal of Neuroradiology
- Maria Nadeem + 3 more
In the context of brain tumor characterization, we focused on two key questions which, to the best of our knowledge, have not been explored so far: (a) stability of radiomics features to variability in multi-regional segmentation masks obtained with fully-automatic deep segmentation methods and (b) subsequent impact on predictive performance on downstream prediction tasks. The hypothesis is that highly stable and discriminatory radiomics features lead to generalizable radiogenomics models in brain tumor characterization. We used the publicly available BraTS 2020 dataset for tumor segmentation and IDH prediction. For segmentation, the training cohort included 369 subjects with preoperative multiparametric 3D MRI (T1, T1-Gd, T2, and FLAIR) and manual annotations of tumor subregions (whole tumor, WT; tumor core, TC; enhancing core, EC), while the validation cohort comprised 125 subjects with imaging data only. For IDH prediction, the discovery dataset consisted of 148 subjects (57 IDH-mutant, 91 IDH-wildtype) and the testing dataset included 70 subjects (32 IDH-mutant, 38 IDH-wildtype). Seven state-of-the-art CNNs were used for fully automatic multi-regional tumor segmentation. Radiomics feature stability across segmentation models was assessed using the overall concordance correlation coefficient (OCCC), and discriminatory features were selected with recursive feature elimination with support vector machines (RFE-SVM). Predictive performance was evaluated using AUC, and model stability was quantified by the relative standard deviation (RSD) of AUC. Our study found that highly stable radiomics features were predominantly texture-based (79.1%), mainly extracted from the whole tumor (WT) region (96.1%), and largely derived from T1-Gd (35.9%) and T1 (28.0%) sequences. Mean feature stability (OCCC) was highest for WT (0.87 ± 0.12), followed by TC (0.76 ± 0.13), EC (0.72 ± 0.13), and shape features (0.72 ± 0.11), with shape and EC features showing the lowest stability. 
Stability filtering reduced non-physiological variability, as reflected by a lower RSD (2.28% vs. 0.64%), and significantly improved predictive performance across eight segmentation schemes (AUC: 0.81 ± 0.02 vs. 0.94 ± 0.006). Robust and generalizable radiogenomics models can be learned with highly stable and discriminatory radiomics features.
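The overall concordance correlation coefficient (OCCC) used above generalises Lin's pairwise concordance correlation coefficient to more than two segmentation models. A minimal sketch of the pairwise form (an illustration only, not the study's implementation; the feature vector `f` is made up):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient for two sets of readings.

    The OCCC (overall CCC) generalises this pairwise statistic to more than
    two raters/models; only the two-reading case is sketched here.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical radiomics feature values from two segmentation models.
f = np.array([0.82, 0.75, 0.91, 0.66, 0.88])
```

Unlike Pearson correlation, the `(mx - my)**2` term penalises systematic offsets, so two models whose feature values differ by a constant shift do not reach perfect concordance.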
- New
- Research Article
- 10.3389/fmed.2026.1769517
- Feb 12, 2026
- Frontiers in Medicine
- Mengyuan Chen
Objective This work aimed to collect joint computed tomography (CT) imaging and peripheral blood transcriptome data from patients with rheumatoid arthritis (RA) and to construct a deep learning model for the automatic, precise assessment of bone erosion (BE). It also aimed to screen rG4-regulated, RA-related inflammation genes through bioinformatics methods, explore potential associations between BE imaging phenotypes and molecular regulatory features, and provide hypotheses and clues for investigating the post-transcriptional regulatory mechanisms of RA bone destruction. Methods Clinical data, joint CT images, and peripheral blood RNA sequencing data were collected from the RA group (AG, 148 cases) and the healthy control group (BG, 49 cases) at Yancheng Third People’s Hospital. DESeq2 software was used for differential expression analysis of RNA-seq data. Combined with an inflammation core gene set integrated from multiple databases, RA-related inflammation-related differentially expressed genes (irDEGs) were screened. The rG4detector tool was used to predict rG4 structures in target genes. The Metascape database was used for functional enrichment analysis to identify core candidate genes. An optimized U-Net CNN model was constructed based on the PyTorch framework to achieve automatic segmentation and severity quantification of BE in CT images. Multiple metrics were used to evaluate model performance, and the correlation between candidate gene expression levels and imaging scores was analyzed. Results A total of 67 RA-related irDEGs were screened, of which 42 contained potential rG4 structures. The U-Net CNN model performed excellently in BE segmentation, with pixel-level accuracy, Dice Similarity Coefficient (DSC), sensitivity, and specificity on the test set all at high levels. The model’s quantitative score was significantly correlated with the clinical disease activity score (DAS28).
Conclusion CT imaging characteristics of BE in RA patients were closely associated with the expression of rG4s-regulated irDEGs. The deep learning model constructed in this study enabled precise quantification of BE, providing an efficient method for the clinical assessment of RA bone erosion. It also offered a new research perspective and candidate targets for understanding the molecular mechanisms of RA bone destruction at the post-transcriptional regulatory level.
- New
- Research Article
- 10.1002/ima.70313
- Feb 6, 2026
- International Journal of Imaging Systems and Technology
- Satoru Muro + 3 more
ABSTRACT Accurate and efficient image segmentation is crucial in anatomy, histology, and pathology research. Conventional manual approaches are time‐consuming, whereas fully automated artificial intelligence segmentation requires substantial manual correction owing to inaccuracy. To address this, we developed SegRef3D, a tool integrating the Segment Anything Model 2 with multiframe tracking and interactive refinement functions, enabling streamlined segmentation workflows for anatomical research. SegRef3D is implemented as a standalone, offline desktop application that operates entirely in a local environment, eliminating the need for cloud‐based services. SegRef3D provides a unified workflow from data import to segmentation, object tracking, refinement, and three‐dimensional model export. Users can specify segmentation prompts through bounding box input, track objects across multiple frames with start–end range selection, and refine results using intuitive Add to Mask and Erase from Mask tools. Up to 20 objects can be handled simultaneously, with each assigned a unique color. The software supports the Standard Tessellation Language output for three‐dimensional modeling and includes volume measurement functions. The SegRef3D prototype, called Seg&Ref, has been applied in studies using serial histological sections, correlative microscopy with block‐face imaging, and pelvic magnetic resonance imaging. Building on these applications, SegRef3D further enhances usability and enables a seamless workflow. SegRef3D offers an accessible, efficient, and accurate segmentation environment tailored for morphological and anatomical studies. Combining artificial intelligence‐powered automatic segmentation with human‐guided refinement in a user‐friendly graphical user interface bridges the gap between research needs and computational methods. 
By supporting applications that span traditional anatomy and modern pathology, SegRef3D provides a versatile platform for integrative morphological analysis. Its open‐source availability ensures its broad applicability in research, education, and clinical training in the anatomical sciences.
- New
- Research Article
- 10.1002/ima.70314
- Feb 6, 2026
- International Journal of Imaging Systems and Technology
- Bing Wang + 3 more
ABSTRACT Automatic and accurate medical image segmentation (MIS) can help doctors identify regions of interest (ROIs) more efficiently and can provide more reliable diagnostic information and treatment options. In recent years, the denoising diffusion model, known for its excellent detail expression and good generalization performance, has shown promising results in MIS. Existing diffusion-based segmentation networks typically take the original images as the conditional information and ignore the ambiguity of the ROIs' boundaries, resulting in inconsistent boundary predictions and inaccurate segmentation results. The variability in the size and shape of ROIs poses additional challenges when applying diffusion models to MIS. To solve these problems, we propose a multi-scale boundary-enhanced diffusion segmentation network (MBDS-Net) for MIS to improve the accuracy of boundary segmentation. Specifically, we design a multi-scale boundary-aware enhancement (MBE) module to enhance the boundary restoration ability for ROIs of different scales and shapes. In addition, we propose an attention denoising residual (ADR) module that focuses on extracting key features during the progressive denoising process, reducing the impact of noise on segmentation and enhancing the robustness of the model. Furthermore, we adopt deep supervision in the decoder to improve the training convergence and feature discriminability of the diffusion model. We conduct experiments on three public datasets and compare our model with existing advanced segmentation models to demonstrate its superiority in MIS. The code is available at https://github.com/FionaYeager/MBDS-Net.
- New
- Research Article
- 10.1016/j.aanat.2026.152803
- Feb 5, 2026
- Annals of Anatomy (Anatomischer Anzeiger)
- Pınar Cihan + 2 more
Image Processing-Based Automatic Tooth Segmentation and Age Estimation in Sheep Using Deep Learning.
- New
- Research Article
- 10.3389/fped.2026.1673925
- Feb 5, 2026
- Frontiers in Pediatrics
- Sumin Jung + 6 more
Introduction Identifying the thoracic vertebrae visible on chest radiographs is standard practice for assessing the proper position of tube and catheter tips within their designated anatomical target regions in critically ill newborn infants. We introduce a fully automated deep learning system based on the nnU-Net architecture for segmenting and labeling T1, T7, and T12 in neonatal chest radiographs. Methods We retrospectively collected 14,660 neonatal chest radiographs from 10 university hospitals in Korea, including both infants with tubes or catheters and those without. All images were deidentified and annotated for the T1, T7, and T12 vertebrae using rectangular bounding boxes, validated by pediatricians. We split the dataset into training (11,860), validation (1,400), and test (1,400) sets, maintaining an even distribution by gestational age and birth weight. Results The automatic segmentation algorithm demonstrated excellent agreement with human-annotated segmentation for the T1, T7, and T12 vertebrae [Dice similarity coefficient (DSC): 0.8327, 95% CI: 0.8237–0.8418; 0.8322, 95% CI: 0.8213–0.8432; 0.7998, 95% CI: 0.7864–0.8133, respectively]. To identify the approximate location of each vertebra, a relatively modest DSC threshold of 0.50 or 0.60 already yielded an accuracy above 90% for T1, T7, and T12. Conclusion Our deep learning-based automated algorithm built on the nnU-Net framework could accurately segment and label the T1, T7, and T12 thoracic vertebrae in neonatal chest radiographs. This artificial intelligence-driven approach can map anatomical target regions based on thoracic vertebrae for assessing the positioning of tube and catheter tips in a neonatal intensive care unit.
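The Dice similarity coefficient reported above is a simple overlap statistic on binary masks. A generic sketch (the toy 2×3 masks are hypothetical, not study data), together with the closely related IoU used elsewhere in this listing:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def iou(a, b):
    """Intersection over union; note Dice = 2*IoU / (1 + IoU)."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Toy 2x3 masks: 2 overlapping pixels out of 3 predicted and 3 ground-truth.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
```

Because Dice and IoU are monotonic transforms of each other, a fixed DSC threshold such as the 0.50–0.60 used above corresponds to an equivalent IoU threshold.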
- New
- Research Article
- 10.1186/s13244-026-02213-8
- Feb 2, 2026
- Insights into Imaging
- Congyu Tang + 11 more
Objectives The aim of this study was to develop an artificial intelligence model to automatically differentiate between non-neoplastic and neoplastic gallbladder polyps, while also distinguishing benign from malignant polyps. Materials and methods Patients with gallbladder polyps who underwent cholecystectomy from January 2022 to June 2023 were recruited retrospectively from two hospitals. Conventional ultrasound findings and clinical characteristics of patients before cholecystectomy were acquired. Ultrasound image blocks of gallbladder lesions were automatically segmented by the Unet network for diagnosis. A fusion deep learning model based on dual-mode ultrasound (grey-scale ultrasound and colour Doppler flow imaging) was established to diagnose gallbladder polyps and validated in the validation and test sets. Finally, we compared the diagnostic efficiency of the model with that of radiologists and guidelines. Results A total of 339 patients (mean age 53.17 ± 15.89 years, 182 females) were enrolled in this study. The Dice coefficient and intersection over union (IoU) value of the automatic segmentation based on the Unet-efficientnet-b4 network were 0.912 and 0.838, respectively. In differentiating non-neoplastic from neoplastic polyps, the integrative deep learning (IDL) model showed areas under the curve (AUCs) of 0.829 and 0.802 in the validation and test sets, respectively. In differentiating benign and malignant polyps, the IDL model showed AUCs of 0.844 and 0.839 in the validation and test sets, respectively.
In the test set, the diagnostic performance of two junior radiologists improved with the assistance of the IDL model. Conclusion The IDL model based on dual-mode ultrasound achieved accurate and automatic segmentation of gallbladder lesions and showed excellent diagnostic performance for gallbladder polyps. Critical relevance statement We developed a deep learning model based on conventional ultrasound that performs gallbladder segmentation while differentiating neoplastic from non-neoplastic polyps and benign from malignant polyps. Key Points Diagnosing gallbladder polyps through a deep learning model based on conventional ultrasound presents challenges. The IDL model enables automated segmentation of the gallbladder and diagnosis of gallbladder polyps. The IDL model is a reliable tool to assist junior radiologists in diagnosis and has potential for reducing unnecessary cholecystectomies.
- New
- Research Article
- 10.1016/j.cger.2025.08.012
- Feb 1, 2026
- Clinics in Geriatric Medicine
- Nashwa Masnoon + 8 more
Muscle Composition as a Novel Prognostic Tool for Pain, Frailty, and Sarcopenia.
- New
- Research Article
- 10.1002/rcs.70140
- Feb 1, 2026
- The International Journal of Medical Robotics and Computer Assisted Surgery (MRCAS)
- Baoping Zhu + 4 more
Automatic segmentation of the foetal head from ultrasound imagery is considered a key step in prenatal examination. However, achieving high-quality semi-supervised foetal head image segmentation remains challenging due to low image resolution, unclear boundaries, and inconsistencies between labelled and unlabelled data. To overcome these obstacles, we propose MCPNet, a morphological constraint-based copy-paste network for semi-supervised foetal head segmentation, incorporating score-guided morphological refinement (SMR) and copy-paste mixing augmentation (CPMA). SMR employs weighted scores derived from Sobel operators and the Euclidean distance transform to ensure boundary consistency. Additionally, to mitigate the distribution gap between labelled and unlabelled data, we introduce CPMA, which uses random cropping to swap foreground and background between labelled and unlabelled data. On the HC18 and PSFH benchmarks, our method achieves Dice scores of 93.72% and 92.31%, respectively, with 20% labelled data. The results demonstrate our superior performance and clinical potential.
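The SMR weighting above combines Sobel gradients with a distance transform. As a rough illustration of just the Sobel ingredient (not the paper's weighting scheme), the gradient magnitude of an image can be computed with plain NumPy:

```python
import numpy as np

# Sobel kernels (cross-correlation form; flipping them only changes the sign,
# which the magnitude discards).
KX = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
KY = KX.T

def conv3_valid(img, k):
    """3x3 'valid' cross-correlation built from shifted views (pure NumPy)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def sobel_magnitude(img):
    """Gradient magnitude: large on boundaries, ~0 in flat regions."""
    return np.hypot(conv3_valid(img, KX), conv3_valid(img, KY))
```

On a vertical step edge the magnitude peaks at the boundary columns and is zero in the flat regions, which is the property a boundary-consistency weight exploits.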
- New
- Research Article
- 10.1016/j.bspc.2025.108476
- Feb 1, 2026
- Biomedical Signal Processing and Control
- Elena Goyanes + 5 more
Inter-expert reliability in multi-field-of-view automatic drusen segmentation analysis using optical coherence tomography
- New
- Research Article
- 10.1016/j.media.2025.103882
- Feb 1, 2026
- Medical Image Analysis
- Qibiao Wu + 2 more
Direction-Aware convolution for airway tubular feature enhancement network.
- New
- Research Article
- 10.1016/j.imavis.2025.105878
- Feb 1, 2026
- Image and Vision Computing
- Yadi Gao + 5 more
MSBC-Segformer: An automatic segmentation model of clinical target volume and organs at risk in CT images for radiotherapy after breast-conserving surgery
- New
- Research Article
- 10.54097/57kwmc65
- Jan 30, 2026
- Journal of Computer Science and Artificial Intelligence
- Qin Zhang + 6 more
Colon polyps are among the most common intestinal lesions in clinical practice and the most typical precancerous lesions of colorectal cancer. Accurate segmentation is crucial for computer-aided diagnosis systems. Deep learning methods based on convolutional neural networks (CNNs) have been applied to the automatic segmentation of colorectal polyps, but they still face challenges such as inter-polyp differences, intra-polyp variations, and changes in imaging environments, which make it difficult to meet clinical requirements for segmentation accuracy. Integrating pathological prior knowledge into deep learning models to extract colorectal polyp-related features has therefore become a core approach to addressing these problems. This review summarizes the current state of colorectal polyp segmentation technologies that combine convolutional networks and pathological features, introduces the research background and significance of this line of work, and compares representative domestic and international studies. On this basis, it summarizes existing problems and analyzes future development trends, aiming to provide a reference for subsequent research.
- New
- Research Article
- 10.1097/js9.0000000000004879
- Jan 29, 2026
- International Journal of Surgery (London, England)
- Chenxi Lyu + 8 more
Gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) are heterogeneous tumors with rising incidence, necessitating precise preoperative grading for treatment planning. Existing imaging techniques and endoscopic biopsies often fall short due to insufficient markers and tissue samples. Body composition influences tumor biology, yet traditional 2D assessments are time-consuming and lack objectivity. This study aimed to develop a rapid, non-invasive predictive model by integrating automatically segmented abdominal volumetric body composition with machine learning to differentiate between low-grade and high-grade GEP-NENs. This multicenter retrospective cohort study enrolled 633 patients with GEP-NENs from three institutions. Patients were divided into a training set (n = 403) and an internal validation set (n = 174) (a 7:3 ratio, from Hospital 1), and a test set (n = 56) from two other hospitals. An nnUNetv2-based automatic segmentation algorithm for abdominal fat tissue and skeletal muscle on arterial-phase CT was applied. The visceral fat index, subcutaneous fat index, intermuscular fat index, and skeletal muscle index were calculated. Features with a P-value < 0.05 were selected using univariate logistic regression and included in a prediction model built with the extreme gradient boosting algorithm. Receiver operating characteristic (ROC) curves and decision curve analysis (DCA) were performed to evaluate the utility of the model. SHapley Additive exPlanations (SHAP) was used to enhance model interpretability and visualization. The automatic segmentation achieved a Dice coefficient of 0.98. For pathological grading, the model built on body composition parameters achieved an AUC of 0.863 in the training set, 0.750 in the validation set, and 0.717 in the test set.
SHAP analysis revealed that the relative intermuscular adipose tissue (rIMAT) contributed the most among the body composition parameters to the model decision-making, and rIMAT levels were higher in P53-mutant and CK19-positive cases compared to negative cases. Auto-segmented abdominal body composition combined with a machine learning-based model could provide an assisted, non-invasive tool for predicting pathological grade in GEP-NENs.
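Body-composition indices of the kind listed above are conventionally computed as tissue cross-sectional area normalised by height squared. The abstract does not spell out its formulas, so the following one-liner is an assumption based on that convention:

```python
def tissue_index(area_cm2, height_m):
    """Tissue cross-sectional area (cm^2) normalised by height squared (m^2).

    The area/height^2 convention is the usual definition of skeletal muscle
    and fat indices in body-composition studies; the study's exact formulas
    are not given, so treat this as an assumption.
    """
    return area_cm2 / height_m ** 2
```

For example, a skeletal muscle area of 100 cm² in a 2.0 m patient yields an index of 25 cm²/m², which is how such features would enter a downstream gradient-boosting model.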
- New
- Research Article
- 10.1038/s41597-026-06620-w
- Jan 29, 2026
- Scientific Data
- Guohui Li + 4 more
In orthognathic surgery, accurate segmentation of the pterygopalatine and mandibular canals in maxillofacial cone beam computed tomography (CBCT) scans is crucial. It provides critical information to prevent nerve damage during surgery and significantly reduces the risk of surgical complications. However, the high cost of data collection, strict patient privacy protection, and ethical constraints have hampered the performance of existing deep learning methods for pterygopalatine and mandibular canals segmentation, limiting their practical applicability in clinical settings. To address this challenge and advance the development of pterygopalatine and mandibular canal segmentation techniques in maxillofacial CBCT scans, we carefully constructed and made publicly available a large dataset for pterygopalatine and mandibular canal segmentation in maxillofacial CBCT scans. This dataset includes 191 patient cases and comprehensively covers the key anatomical structures of the maxillary pterygopalatine canal and the mandibular canal, both of which are crucial in orthognathic surgery. Notably, this dataset is the first to include data on the maxillary pterygopalatine canal, filling a significant gap in this field. The release of this dataset will greatly accelerate the development of deep learning-based segmentation methods, provide clinicians with more accurate reconstruction tools, and ultimately improve the safety and efficiency of surgical procedures.
- New
- Research Article
- 10.1088/1361-6560/ae387c
- Jan 28, 2026
- Physics in Medicine & Biology
- Jinkui Hao + 6 more
Objective. Non-contrast cardiac computed tomography (NCCT) offers a low-dose, cost-effective alternative to coronary CT angiography (CCTA) for large-scale coronary artery disease screening. However, automatic segmentation on NCCT is severely hindered by poor vessel visibility and a scarcity of annotated datasets. This study aims to overcome these limitations by developing a method for accurate coronary artery segmentation (CAS) from NCCT images without requiring manual annotations. Approach. We propose synthetic-data-driven CAS (SynCAS), a deep learning framework trained entirely on synthetic data. First, we developed a comprehensive generation pipeline to create a diverse, large-scale synthetic NCCT dataset with perfect ground truth, modeling the physics of NCCT imaging. Second, to address the low contrast-to-noise ratio, we introduced an anatomy-informed contrastive learning strategy. Unlike traditional methods, this strategy utilizes voxel-level pseudo-negative samples guided by anatomical priors, enabling the model to effectively distinguish coronary arteries from visually similar background structures and reduce false positives. Main results. The proposed method was evaluated on both a public NCCT dataset and an in-house clinical dataset. Experimental results demonstrate that SynCAS consistently outperforms state-of-the-art unsupervised and domain-adaptation approaches. The model exhibits strong generalization capabilities across different datasets despite being trained without real-world annotations. Significance. SynCAS provides a robust solution for analyzing coronary arteries in non-contrast imaging, potentially facilitating retrospective analysis and large-scale population screening for cardiovascular risk without the radiation dose and contrast agent risks associated with CCTA. Code and model weights will be available at: https://github.com/Advanced-AI-in-Medicine-and-Physics-Lab/SynCAS.git.
- New
- Research Article
- 10.4329/wjr.v18.i1.115503
- Jan 28, 2026
- World Journal of Radiology
- Yu-Han Yang + 1 more
BACKGROUND Children with hepatoblastoma (HB) show high heterogeneity, with distinct survival outcomes among individuals after surgical resection. It is therefore essential to identify high-risk patients with poor outcomes before surgery in order to add appropriate neoadjuvant chemotherapy for improving prognosis. AIM To evaluate the performance of a deep learning (DL)-based radiomics (DLBR) score at predicting event-free survival (EFS) in patients with early-stage HB who underwent surgical resection. METHODS A total of 106 patients who underwent magnetic resonance imaging scanning and surgical excision were included retrospectively at two hospitals and assigned to a training cohort (n = 74) from one institution and a testing cohort (n = 32) from the other institution. Widely adopted clinicopathologic variables were collected, and magnetic resonance imaging-derived DL-based features were extracted through automatic segmentation. We developed a DLBR score based on DL-based features and an integrated clinical-DL nomogram model, and validated them externally. RESULTS The DLBR score was generated from four DL-based features, including three T1-derived features and one T2-derived feature. The integrated clinical-DL nomogram was constructed based on the Pretreatment Extension of Disease stage, alpha-fetoprotein concentration, and the DLBR score. The integrated nomogram had relatively better prognostic and calibration abilities and less opportunity for prediction error compared with the clinicopathologic predictors alone and the DLBR score alone in both training and external validation.
Additionally, the DLBR score could accurately stratify HB patients into two EFS-related risk subgroups, and showed fine discrimination in identifying patients with different survival outcomes within identical subgroups of clinical predictors. CONCLUSION The DLBR score is a noninvasive and reliable tool for predicting EFS in early-stage HB patients undergoing surgical resection, and might inform therapeutic plans for improving prognosis.
- New
- Research Article
- 10.3389/fonc.2025.1612984
- Jan 27, 2026
- Frontiers in Oncology
- Hao Qiu + 4 more
Objective To investigate the feasibility and clinical value of RT-Mind, a convolutional neural network (CNN)-based auto-segmentation software, in delineating the clinical target volume (CTV) and pelvic bone marrow (PBM) as organs at risk (OARs) during postoperative radiotherapy for cervical cancer. Methods A retrospective analysis was conducted on 55 cervical cancer patients who underwent postoperative radiotherapy between March 2024 and January 2025. Manual delineations by experienced radiation oncologists were compared with auto-segmentations generated by RT-Mind for the CTV and OARs (including the rectum, bladder, bowel bag, femoral heads, and bone marrow). Evaluation metrics included the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), Jaccard Index (JAC), and Sensitivity Index (SI), along with time-efficiency comparisons between manual and automatic contouring. Results The auto-segmentation achieved favorable accuracy across multiple structures. For bone marrow, the DSC, HD, JAC, and SI were 0.89 ± 0.05, (2.39 ± 0.90) mm, 0.80 ± 0.11, and 0.87 ± 0.04, respectively. The bladder and femoral heads also showed high concordance, with DSCs exceeding 0.91 and HDs below 2 mm. Auto-segmentation significantly reduced contouring time across all structures; for the CTV, the average time decreased from (4151.54 ± 300.23) seconds to (45.82 ± 2.00) seconds (t = -102.10, p < 0.001). From a dosimetric perspective, auto-segmentation achieved CTV coverage comparable to manual methods (P > 0.05), but showed statistically significant improvements in organ-at-risk sparing for the bone marrow, small bowel, and rectum (P < 0.05). No clinically relevant differences were detected for bladder or femoral head doses. Conclusion The RT-Mind software, based on a U-Net architecture, demonstrates high accuracy and efficiency in segmenting the CTV and OARs in postoperative radiotherapy for cervical cancer, particularly in delineating pelvic bone marrow.
It effectively reduces contouring time and inter-observer variability, offering promising clinical applicability.
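The Hausdorff distance (HD) reported in the entry above measures worst-case boundary disagreement between two contours. A brute-force sketch for small point sets (illustrative only; the square contour is made up):

```python
import numpy as np

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two 2D point sets.

    Brute-force pairwise distances; adequate for contour-sized sets,
    unlike the optimised implementations clinical toolkits use.
    """
    a = np.asarray(a_pts, float)
    b = np.asarray(b_pts, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Hypothetical contour: corners of a unit square.
square = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
```

Because it takes a maximum over per-point minima, the HD is driven by the single worst outlier point, which is why it complements overlap measures like the DSC.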
- New
- Research Article
- 10.1109/tmi.2026.3658169
- Jan 26, 2026
- IEEE Transactions on Medical Imaging
- Qihua Chen + 4 more
Reconstructing neurons from large electron microscopy (EM) datasets for connectomic analysis presents a significant challenge, particularly in segmenting neurons of complex morphologies. Previous deep learning-based neuron segmentation methods often rely on pixel-level image context and produce extensive oversegmented fragments. Detecting these split errors and merging the split neuron segments are non-trivial for various neurons in a large-scale EM data volume. In this work, we exploit multimodal features in the full workflow of automatic neuron proofreading. We propose a novel connection point detection network that utilizes both global 3D morphological features and high-resolution local image context to extract candidate segment pairs from massive adjacent segments. To effectively fuse the 3D morphological feature and the dense image features from very different scales, we design a proposal-based image feature sampling to improve the efficiency of multimodal cross-attentions. Integrating the connection point detection network with our connectivity prediction network, which also utilizes multimodal features, we build a fully automatic neuron segment merging pipeline, closely imitating human proofreading. Comprehensive experimental results verify the effectiveness of the proposed modules and demonstrate the robustness of the entire pipeline in large-scale neuron reconstruction. The code and data are available at https://github.com/Levishery/Neuron-Segment-Connection-Prediction.