Articles published on Binary segmentation
845 Search results
- Research Article
- 10.1016/j.cmpb.2025.109216
- Jan 2, 2026
- Computer methods and programs in biomedicine
- Ibrahim Yilmaz + 7 more
SegRenal: AI-Driven segmentation of frozen sections in transplant kidney biopsies - A comparative analysis of deep learning models.
- Research Article
- 10.1080/10298436.2025.2606111
- Dec 24, 2025
- International Journal of Pavement Engineering
- Francisco Contreras + 2 more
In recent years, Convolutional Neural Networks (CNNs) have successfully automated pavement crack segmentation, outperforming traditional methods. Although researchers have proposed multiple models for binary segmentation, their performance in segmenting cracks according to their severity (multiclass segmentation) has not yet been tested, despite the vital importance of crack severity identification for pavement maintenance operations. Therefore, this study evaluates the performance of three CNN models from the crack segmentation literature (U-Net B, CrackNet II, and CrackNet V), and several state-of-the-art semantic segmentation models (CCNet, DANet, Segformer, and OCRNet), for binary and multiclass segmentation. All models are trained and evaluated for both binary and multiclass segmentation using 3D asphalt pavement images collected from different highways in Chile. For binary segmentation, U-Net B achieved the highest performance with an F1-score of 0.76, correctly identifying most crack pixels. For multiclass segmentation, OCRNet achieved the highest F1-score of 0.58. Despite the relatively low F1-score in multiclass segmentation, the results demonstrate that CNN models can identify multiple severity levels in cracks based on their local width. However, they are unable to assign a unique severity to cracks that exhibit varying widths along their length, which reduces their F1-score.
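The binary F1-scores reported above (e.g. 0.76 for U-Net B) are pixel-wise harmonic means of precision and recall over the crack class. A minimal sketch, assuming masks encoded as 0/1 arrays; the function name and encoding are illustrative, not from the paper:

```python
import numpy as np

def pixel_f1(pred: np.ndarray, target: np.ndarray) -> float:
    """Pixel-wise F1 score for binary masks (1 = crack, 0 = background)."""
    tp = np.sum((pred == 1) & (target == 1))
    fp = np.sum((pred == 1) & (target == 0))
    fn = np.sum((pred == 0) & (target == 1))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(pixel_f1(pred, target), 3))  # → 0.667
```

For the multiclass case, the same computation is typically run once per severity class and averaged (macro-F1).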
- Research Article
- 10.3390/rs18010028
- Dec 22, 2025
- Remote Sensing
- Changhui Lee + 6 more
Vegetation segmentation in Very High-Resolution (VHR) satellite imagery has become an essential task for ecological monitoring, supporting diverse applications such as large-scale vegetation conservation and detailed segmentation of small local areas. In particular, multi-class vegetation segmentation, which distinguishes various vegetation types beyond simple binary segmentation of vegetation and non-vegetation, enables detailed analysis of subtle ecosystem changes and has gained increasing importance. However, the annotation of VHR satellite imagery requires extensive time and effort, resulting in a lack of datasets for vegetation segmentation, especially those including multi-class annotations. To address this limitation, this study proposes MultiVeg, a deep learning dataset based on VHR satellite imagery for detailed multi-class vegetation segmentation. MultiVeg includes preprocessed 0.5 m resolution images collected by the KOMPSAT-3 and 3A satellites from 2014 to 2023, covering diverse environments such as urban, agricultural, and forest regions. Each image was carefully annotated by experts into three semantic classes, which are Background, Tree, and Low Vegetation, and validated through a structured quality check process. To verify the effectiveness of MultiVeg, seven representative semantic segmentation models, including convolutional neural network and Transformer-based architectures, were trained and comparatively analyzed. The results demonstrated consistent segmentation performance across all classes, confirming that MultiVeg is a high-quality and reliable dataset for deep learning-based multi-class vegetation segmentation research using VHR satellite imagery. The MultiVeg will be publicly available through GitHub (release v1.0), serving as a valuable resource for advancing deep leaning-based vegetation segmentation research in the remote sensing field.
- Research Article
- 10.1038/s41598-025-31492-2
- Dec 18, 2025
- Scientific Reports
- István Reményi + 3 more
Accurate recognition of cracks in asphalt pavements is fundamental to proactive maintenance and infrastructure safety management. From a repair perspective, identifying damaged surface areas on the pavement is more about finding cracks and their locations than about precise curve estimation. If we treat the problem as binary semantic segmentation, this means that recall is ranked above a given model's precision, with varying crack widths among the main challenges. In this paper, we focus on modifying a standard U-Net with a voting-based segmentation head called HoughNet, originally created for object detection and segmentation tasks, and with an additional reconstruction head for pixel-level information preservation. Combining these with the most appropriate loss functions, we measured performance on two datasets with slightly different crack characteristics. We showed that the model trained and evaluated on the Crack500 dataset can find more relevant cracked surface regions with a similar magnitude of precision compared to competitor architectures. To probe the limitations of the concept, we also report results on GAPS384, with thinner crack lines, where the concept can shift model performance towards the best F1-scores at both the dataset and image scale.
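One common way to encode a recall-over-precision preference in a segmentation loss is the Tversky index, which weights false negatives more heavily than false positives. The paper does not state that it uses this particular loss; the sketch below is a generic illustration of the idea, with illustrative weights:

```python
import numpy as np

def tversky_index(pred: np.ndarray, target: np.ndarray,
                  alpha: float = 0.3, beta: float = 0.7) -> float:
    """Tversky index for soft binary masks.

    beta > alpha penalizes false negatives (missed crack pixels) more than
    false positives, which favors recall over precision; alpha = beta = 0.5
    recovers the Dice coefficient.
    """
    pred = pred.astype(float).ravel()
    target = target.astype(float).ravel()
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    return tp / (tp + alpha * fp + beta * fn + 1e-8)

# 1 - tversky_index(pred, target) would serve as a recall-leaning training loss.
```

With these weights, a prediction that misses a crack pixel scores lower than one that adds a spurious pixel, matching the stated ranking of recall above precision.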
- Research Article
- 10.54254/2755-2721/2025.ld30738
- Dec 18, 2025
- Applied and Computational Engineering
- Yicheng Shao
Artificial intelligence (AI) shows great potential for improving surgical efficiency, precision, and autonomy in surgical robotic systems. However, the robustness of deep learning-based algorithms remains a critical challenge, as surgical environments vary considerably in real applications. Most deep learning-based segmentation models, though highly effective on benchmark datasets, often fail under unforeseen non-adversarial corruptions such as occlusions, bleeding, or low brightness. In this study, we introduce a domain-specific augmentation strategy to enhance model robustness against possible surgical corruptions that are not seen in the training data. Our method simulates key corruptions, including blood simulation, brightness adjustment, and contrast adjustment. Based on the SegSTRONG-C benchmark, we evaluate a baseline U-Net model on a binary surgical tool segmentation task. While the baseline shows strong performance on clean images, its accuracy drops substantially on the corrupted test data. Incorporating our proposed augmentations significantly improves performance on corrupted inputs while preserving accuracy on the clean domain. These findings underscore the importance of targeted augmentation for model robustness and demonstrate a practical pathway toward more reliable and generalizable segmentation models for real-world surgical robotics applications.
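The brightness- and contrast-corruption augmentations described above can be sketched with simple intensity transforms. The ranges and function names below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def adjust_brightness(img: np.ndarray, delta: float) -> np.ndarray:
    """Shift intensities by delta; img is assumed to lie in [0, 1]."""
    return np.clip(img + delta, 0.0, 1.0)

def adjust_contrast(img: np.ndarray, factor: float) -> np.ndarray:
    """Scale intensities around the image mean (factor < 1 flattens contrast)."""
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0.0, 1.0)

def augment(img: np.ndarray) -> np.ndarray:
    """Apply random brightness/contrast corruption during training."""
    img = adjust_brightness(img, rng.uniform(-0.3, 0.3))
    return adjust_contrast(img, rng.uniform(0.5, 1.5))
```

Training on images passed through such transforms exposes the model to the intensity shifts it will later face in corrupted test data.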
- Research Article
- 10.3390/a18120796
- Dec 16, 2025
- Algorithms
- Muhammad Shahrul Zaim Ahmad + 4 more
Image segmentation is one of the important applications of deep learning models, such as U-Net and Mask R-CNN, in medical imaging. The image segmentation process enables automated extraction of important information within images, including spine X-rays, saving medical practitioners hours of work. However, for X-ray images, low contrast and noise may affect the quality of the images and consequently reduce the effectiveness of the deep learning models in providing a robust segmentation. Image enhancement prior to feeding the images to segmentation models can help to overcome the issues caused by low-quality images. This paper aims to evaluate the effects of three image enhancement methods, namely contrast-limited adaptive histogram equalization (CLAHE), histogram equalization (HE), and anisotropic diffusion (AD), on the image segmentation performance of Mask R-CNN, non-transfer-learning Mask R-CNN, and U-Net. The findings show that image enhancement methods provide significant improvement for U-Net, while, interestingly, no noticeable performance improvement is observed for Mask R-CNN. The application of HE for transfer-learning Mask R-CNN achieved the highest Dice score of 0.942 ± 0.001 for binary segmentation. The randomly initialized Mask R-CNN obtains the highest DSC of 0.941 ± 0.002 on the same task. On the other hand, for U-Net, despite statistically significant changes from applying image enhancement methods, the model achieves a maximum Dice score of 0.916 ± 0.003, lower than Mask R-CNN with and without transfer learning. Further study of image enhancement methods together with recent deep learning algorithms is necessary to better understand the effect of image enhancement techniques on deep learning-based segmentation.
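Of the three enhancement methods compared, plain histogram equalization (HE) is the simplest to sketch: it remaps gray levels through the normalized cumulative histogram. A minimal 8-bit version, illustrative rather than the paper's implementation:

```python
import numpy as np

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    if cdf[-1] == cdf_min:  # constant image: nothing to equalize
        return img.copy()
    # Map each gray level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

CLAHE applies the same idea per tile with a clip limit on the histogram, which is why it handles locally low-contrast X-ray regions better than this global version.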
- Abstract
- 10.1002/alz70856_106019
- Dec 1, 2025
- Alzheimer's & Dementia
- Adam Martersteck + 12 more
Background: White matter hyperintensities (WMHs) are critical markers of cerebrovascular health and neurodegenerative disease. Accurate and reproducible quantification of WMHs is essential for characterizing vascular contributions to aging, cognition, and Alzheimer disease and related dementias. Deep learning pipelines have emerged as powerful tools for WMH segmentation, yet limited research compares their performance using expert evaluation as the benchmark. Here, we assess the performance of five deep learning WMH segmentation pipelines by comparing their outputs through blinded neuroradiologist ratings. Method: We processed FLAIR scans from 100 older adults (aged 80 and older) enrolled in the SuperAging Research Initiative. 3D T2-weighted FLAIR and T1-weighted MPRAGE sequences followed the ADNI-3 protocol, acquired across five sites, using 3T scanners from three vendors (GE, Siemens, Philips). Binary segmentation masks from five deep learning pipelines were utilized: sysu_media, ANTSx, DeepWMH, TrUE-Net, and HyperMapp3r. A neuroradiologist (C.V.) evaluated the randomized per-participant segmentation masks, overlaid on the FLAIR and T1-weighted images, using a 7-point Likert-type scale, where 1 indicated "poor segmentation" and 7 indicated "excellent segmentation". Ratings were based on anatomical plausibility and alignment with WMH voxels visible on FLAIR. To compare scores, a Kruskal-Wallis test and post-hoc Mann-Whitney pairwise comparisons were used. Result: The Kruskal-Wallis test revealed significant differences in segmentation quality across the five pipelines (p = 7.73 × 10⁻⁴³). Post-hoc Mann-Whitney tests showed ANTSx (mean rating = 5.59 ± 1.17) performed significantly better than all other pipelines (all p < 0.00001), while HyperMapp3r (mean rating = 2.33 ± 1.22) consistently received significantly lower ratings (all p < 0.00001). DeepWMH (mean rating = 4.45 ± 1.34), sysu_media (mean rating = 4.18 ± 1.20), and TrUE-Net (mean rating = 4.49 ± 1.18) had comparable ratings, with no significant differences between the three. Conclusion: This study highlights significant variability in the quality of WMH segmentation across commonly used deep learning pipelines when benchmarked against expert evaluation. Among the evaluated pipelines, ANTSx demonstrated superior performance, producing clinically plausible segmentations with high anatomical fidelity. These findings underscore the importance of expert validation in selecting and refining automated segmentation tools for research and clinical applications in aging and neurodegenerative disease.
- Research Article
- 10.1007/s10851-025-01274-6
- Dec 1, 2025
- Journal of Mathematical Imaging and Vision
- Jun Liu + 3 more
Abstract Convex shapes (CSs) are common priors for image segmentation. It is important to design proper techniques to represent CS. So far, it remains a challenge to guarantee that the output objects from deep convolution neural networks (DCNNs) are CS. In this work, we propose a convex shape technique that can be easily integrated into the commonly used DCNN architectures and guarantee that outputs are CS. This method is flexible, and it can handle multiple objects and allow some of them to be convex. Our method is based on the dual representation of the sigmoid activation function in DCNNs. Moreover, our method can integrate spatial regularization and other shape priors by using a soft threshold dynamics (STD) method. This regularization can make the boundary curves of the segmented objects simultaneously smooth and convex. We design a very stable active set projection algorithm to solve our model numerically. This algorithm can form a new plug-and-play DCNN layer called CS-STD, whose outputs must be a nearly binary segmentation of convex objects. In the CS-STD block, the convexity information can be propagated to guide the DCNN in both forward and backward propagation during the training and prediction. As an application example, we apply the convexity prior layer to the segmentation of the retinal fundus image using the popular U-Net, BCDU, MedSAM and DeepLabV3+ as the backbone networks. Experimental results on several public datasets show that our method is efficient and outperforms classic DCNN segmentation methods.
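The dual representation of the sigmoid mentioned above is usually stated as an entropic maximization; one standard form, which may differ in notation from the paper's, is:

```latex
\sigma(x) \;=\; \operatorname*{arg\,max}_{u \in (0,1)} \bigl\{\, u x + H(u) \,\bigr\},
\qquad H(u) = -u \ln u - (1 - u)\ln(1 - u).
```

Setting the derivative $x - \ln\bigl(u/(1-u)\bigr)$ to zero recovers $u = \sigma(x) = 1/(1+e^{-x})$, and the attained maximum is the softplus $\ln(1 + e^{x})$. Replacing $H$ with a constrained regularizer is the hook that lets a shape prior such as convexity be injected into the same variational form.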
- Abstract
- 10.1002/alz70862_110016
- Dec 1, 2025
- Alzheimer's & Dementia
- Adam Martersteck + 12 more
Background: White matter hyperintensities (WMHs) are critical markers of cerebrovascular health and neurodegenerative disease. Accurate and reproducible quantification of WMHs is essential for characterizing vascular contributions to aging, cognition, and Alzheimer disease and related dementias. Deep learning pipelines have emerged as powerful tools for WMH segmentation, yet limited research compares their performance using expert evaluation as the benchmark. Here, we assess the performance of five deep learning WMH segmentation pipelines by comparing their outputs through blinded neuroradiologist ratings. Method: We processed FLAIR scans from 100 older adults (aged 80 and older) enrolled in the SuperAging Research Initiative. 3D T2-weighted FLAIR and T1-weighted MPRAGE sequences followed the ADNI-3 protocol, acquired across five sites, using 3T scanners from three vendors (GE, Siemens, Philips). Binary segmentation masks from five deep learning pipelines were utilized: sysu_media, ANTSx, DeepWMH, TrUE-Net, and HyperMapp3r. A neuroradiologist (C.V.) evaluated the randomized per-participant segmentation masks, overlaid on the FLAIR and T1-weighted images, using a 7-point Likert-type scale, where 1 indicated "poor segmentation" and 7 indicated "excellent segmentation". Ratings were based on anatomical plausibility and alignment with WMH voxels visible on FLAIR. To compare scores, a Kruskal-Wallis test and post-hoc Mann-Whitney pairwise comparisons were used. Result: The Kruskal-Wallis test revealed significant differences in segmentation quality across the five pipelines (p = 7.73 × 10⁻⁴³). Post-hoc Mann-Whitney tests showed ANTSx (mean rating = 5.59 ± 1.17) performed significantly better than all other pipelines (all p < 0.00001), while HyperMapp3r (mean rating = 2.33 ± 1.22) consistently received significantly lower ratings (all p < 0.00001). DeepWMH (mean rating = 4.45 ± 1.34), sysu_media (mean rating = 4.18 ± 1.20), and TrUE-Net (mean rating = 4.49 ± 1.18) had comparable ratings, with no significant differences between the three. Conclusion: This study highlights significant variability in the quality of WMH segmentation across commonly used deep learning pipelines when benchmarked against expert evaluation. Among the evaluated pipelines, ANTSx demonstrated superior performance, producing clinically plausible segmentations with high anatomical fidelity. These findings underscore the importance of expert validation in selecting and refining automated segmentation tools for research and clinical applications in aging and neurodegenerative disease.
- Research Article
- 10.1186/s40644-025-00953-2
- Nov 13, 2025
- Cancer Imaging
- Aqib Ali + 5 more
Background: Brain tumor classification using Magnetic Resonance Imaging (MRI) is crucial for diagnosis and treatment planning. The differentiation between malignant and benign brain tumors and their subtypes remains a challenging task that can benefit from advanced computational techniques. Purpose: This study uses an MRI dataset to explore the effectiveness of deep learning (DL) and machine learning (ML) approaches for classifying brain tumors. Materials and Methods: A dataset comprising 1200 DICOM brain tumor MRI images, representing malignant and benign tumors with six subtypes, was prepared. Each image was converted to a 512 × 512-pixel digital format, selecting 200 images per tumor class. Image quality was enhanced using sharpening algorithms and mean filtering. The proposed edge refined binary histogram segmentation (ER-BHS) was applied to extract hybrid features from the regions of interest. Feature optimization through a correlation-based method reduced the dataset to 11 key features. Multiple classifiers, including DL, neural networks, and ML models, were evaluated on the optimized dataset using 10-fold cross-validation. Results: Among the tested models, the random committee (RC) classifier demonstrated superior performance, achieving an accuracy of 98.61% on the optimized hybrid brain tumor MRI dataset. Overall, DL and ML methods effectively automated brain tumor classification. Conclusion: The promising results affirm the potential of DL and ML approaches to enhance medical image analysis and improve diagnostic accuracy in brain tumor classification, potentially revolutionizing clinical workflows.
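The correlation-based feature optimization step can be sketched as a greedy filter that drops any feature too correlated with one already kept. The threshold and function below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def correlation_filter(X: np.ndarray, threshold: float = 0.9) -> list:
    """Greedy correlation-based feature selection.

    X: (n_samples, n_features). A feature is kept only if its absolute
    correlation with every previously kept feature is below the threshold.
    Returns the indices of the kept columns.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep
```

Variants also rank features by their correlation with the class label first, so the most predictive member of each correlated group survives.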
- Research Article
- 10.1038/s41598-025-20721-3
- Nov 10, 2025
- Scientific Reports
- Jaysel Theresa Silveira + 2 more
Accurate segmentation of spinal structures, including vertebrae, intervertebral discs (IVDs), and the spinal canal, is crucial for diagnosing lumbar spine disorders. Deep learning-based semantic segmentation has significantly improved accuracy in medical imaging. This study proposes an enhanced U-Net incorporating an Inception module for multi-scale feature extraction and a dual-output mechanism for improved training stability and feature refinement. The model is trained on the SPIDER lumbar spine MRI dataset and evaluated using Accuracy, Precision, Recall, F1-score, and mean Intersection over Union (mIoU). Comparative analysis with the baseline models—U-Net, ResUNet, Attention U-Net, and TransUNet—shows that the proposed model achieves superior segmentation accuracy, with improved boundary delineation and better handling of class imbalance. An evaluation of loss functions identified Dice loss as the most effective, enabling the model to achieve an mIoU of 0.8974, an accuracy of 0.9742, a precision of 0.9417, a recall of 0.9470, and an F1-score of 0.9444, outperforming all four baseline models. The Inception module enhances feature extraction at multiple scales, while the dual-output mechanism improves gradient flow and segmentation consistency. Initially focused on binary segmentation, the approach was extended to multiclass segmentation, enabling separate identification of vertebrae, IVDs, and the spinal canal. These enhancements offer a more precise and efficient solution for automated lumbar spine segmentation in MRI, thereby supporting enhanced diagnostic workflows in medical imaging.
- Research Article
- 10.1007/s11548-025-03532-9
- Oct 23, 2025
- International journal of computer assisted radiology and surgery
- Sara Yavari + 2 more
This study proposes DMCIE (diffusion model with concatenation of inputs and errors) to enhance binary brain tumor segmentation from multimodal MRI scans. Accurate voxel-wise tumor localization remains challenging due to variability in tumor size, shape, and imaging conditions, impacting clinical diagnosis and treatment planning. DMCIE employs a two-stage framework: a 3D U-Net first predicts an initial tumor mask from multimodal MRI inputs (T1, T1ce, T2, FLAIR), and an error map highlighting discrepancies with the ground truth is generated. This error map, concatenated with the original inputs, is refined through a diffusion model that iteratively corrects misclassified and boundary regions. The proposed DMCIE method was evaluated on the BraTS2020 dataset. Compared to the initial U-Net segmentation, DMCIE improved Dice by 5.18% and HD95 by 2.07 mm. It shows improvements in boundary accuracy and segmentation across diverse tumor shapes, and maintains spatial coherence, even in fragmented cases. DMCIE introduces an effective error-guided correction mechanism for binary brain tumor segmentation, using multimodal MRI data to enhance segmentation accuracy. By modeling and correcting segmentation errors during diffusion, DMCIE achieves anatomically precise and well-localized tumor segmentation.
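The error-map construction and channel concatenation at the heart of DMCIE can be sketched as follows. Array layout and names are assumptions; note that at inference time the ground truth is unavailable, so the error map must come from a training-time setup or an estimate:

```python
import numpy as np

def error_map(pred_mask: np.ndarray, ground_truth: np.ndarray) -> np.ndarray:
    """Binary map of voxels where the initial segmentation disagrees with GT."""
    return (pred_mask != ground_truth).astype(np.float32)

def build_refiner_input(mri: np.ndarray, pred_mask: np.ndarray,
                        ground_truth: np.ndarray) -> np.ndarray:
    """Stack MRI modalities, the initial mask, and the error map along the
    channel axis as conditioning input for a refinement stage.

    mri: (C, D, H, W) with C modalities (e.g. T1, T1ce, T2, FLAIR);
    pred_mask, ground_truth: (D, H, W) binary volumes.
    """
    err = error_map(pred_mask, ground_truth)
    return np.concatenate([mri, pred_mask[None].astype(np.float32), err[None]],
                          axis=0)
```

The refinement model then concentrates its capacity on exactly the voxels flagged by the error channel, which is why boundary regions benefit most.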
- Research Article
- 10.1080/01621459.2025.2552513
- Oct 17, 2025
- Journal of the American Statistical Association
- Wenyang Zhang + 2 more
In panel data analysis, individual attributes are of importance in many real applications. With the advancement of data collection, it is often possible to acquire enough information about individual attributes in a collected panel dataset, and data from other individuals may contain information about the attributes of the individual of interest. Homogeneity pursuit is an important topic in panel data analysis when individual attributes are of interest. Existing approaches are mainly based on either penalized estimation or binary segmentation, and require reasonably large cluster sizes. However, in practice, people often come across panel datasets with small cluster sizes, that is, short panel datasets. In this article, we propose a new approach to homogeneity pursuit in panel data analysis, which applies to both long and short panel datasets. Our approach differs from existing methods and enjoys good asymptotic properties that justify its adoption. Extensive simulation studies show that the new approach works very well even when cluster sizes are too small to obtain any estimators based on a single individual, outperforming all alternative methods in all conducted cases. Finally, we apply the new approach to a real dataset and illustrate its practical usefulness and superiority. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
- Research Article
- 10.1080/10298436.2025.2569616
- Oct 17, 2025
- International Journal of Pavement Engineering
- Eskndir Getachew Denu + 3 more
Block pavement significantly impacts user convenience, yet its evaluation and maintenance methods are inefficient and resource-intensive. This study applies deep learning models to detect distress in block pavement using a dataset of 10,298 images with five distress types: cracks, broken pavers, missing pavers, excessive joint width, and utility structures. The Hybrid TransUNet model outperformed all the compared models in multiclass and binary segmentation tasks. It effectively segmented broken pavers, missing pavers, excessive joint width, and utility structures but faced challenges with crack detection. Combining outputs from individual binary models for multiclass masking improved IoU accuracy by 5.97%, but this approach is resource-intensive and less practical. These findings highlight the potential of deep learning, especially the TransUNet hybrid model, for enhancing the accuracy and efficiency of automated block pavement distress detection tools.
- Research Article
- 10.1186/s12889-025-24473-7
- Oct 6, 2025
- BMC Public Health
- Liping Yang + 6 more
Objective: Brucellosis represents a significant global challenge; however, epidemiological research on brucellosis in Xinjiang from a change point perspective has been lacking. This study aims to identify significant change points and trends, as well as forecast the number of brucellosis cases in Xinjiang, China, thereby offering recommendations for its prevention and control. Methods: Change points were identified through binary segmentation of the full dataset. The Autoregressive Integrated Moving Average (ARIMA) model, Support Vector Regression (SVR), and ARIMA-SVR models were employed to forecast the number of reported brucellosis cases. Model performance was evaluated using RMSE, MAE, and MAPE, and the optimal model was selected to predict the monthly cases from 2025 to 2027. Results: The results showed five change points in the monthly brucellosis time series. The highest average number of reported brucellosis cases occurred between the fifth change point (January 2023) and the end of the series (December 2024). The ARIMA-SVR model outperformed both the ARIMA and SVR models in predicting brucellosis cases. It is noteworthy that the forecasted results indicate that brucellosis cases will remain at historically high levels over the next three years, with the peak months potentially shifting from June to May and July. Conclusion: Change point analysis holds significant value in the field of epidemiology. The ARIMA-SVR model is suitable for predicting the incidence of brucellosis in Xinjiang, China. It is anticipated that the disease burden of brucellosis in Xinjiang will remain at a high level in the future, and local health authorities should continue to implement stringent targeted prevention and control measures. These research findings provide valuable insights for subsequent epidemiological studies and the development of a brucellosis early warning system. Supplementary Information: The online version contains supplementary material available at 10.1186/s12889-025-24473-7.
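Binary segmentation for change-point detection, as used in the study above, greedily splits the series at the point that most reduces a segment cost, then recurses into each half. A minimal mean-shift sketch; the stopping rule `min_gain` is an illustrative assumption, and libraries such as ruptures provide production implementations:

```python
import numpy as np

def sse(x: np.ndarray) -> float:
    """Cost of a segment: squared deviation from the segment mean."""
    return float(np.sum((x - x.mean()) ** 2)) if len(x) else 0.0

def best_split(x: np.ndarray):
    """Best single change point in x, with its cost reduction."""
    total = sse(x)
    best_i, best_gain = None, 0.0
    for i in range(1, len(x)):
        gain = total - sse(x[:i]) - sse(x[i:])
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i, best_gain

def binary_segmentation(x: np.ndarray, min_gain: float = 5.0) -> list:
    """Recursively split while the cost reduction exceeds min_gain."""
    i, gain = best_split(x)
    if i is None or gain < min_gain:
        return []
    left = binary_segmentation(x[:i], min_gain)
    right = [i + j for j in binary_segmentation(x[i:], min_gain)]
    return left + [i] + right

series = np.array([0.0] * 10 + [5.0] * 10 + [1.0] * 10)
print(binary_segmentation(series))  # → [10, 20]
```

For count data such as monthly case series, the squared-error cost is often replaced by a Poisson likelihood cost, but the recursion is identical.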
- Research Article
- 10.1186/s13007-025-01441-1
- Oct 4, 2025
- Plant Methods
- Sajid Ullah + 5 more
The generation of a large amount of ground truth data is an essential bottleneck for the application of deep learning-based approaches to plant image analysis. In particular, the generation of accurately labeled images of various plant types at different developmental stages from multiple renderings is a laborious task that substantially extends the time required for AI model development and adaptation to new data. Here, generative adversarial networks (GANs) can potentially offer a solution by enabling widely automated synthesis of realistic images of plant and background structures. In this study, we present a two-stage GAN-based approach to the generation of pairs of RGB and binary-segmented images of greenhouse-grown plant shoots. In the first stage, FastGAN is applied to augment original RGB images of greenhouse-grown plants using intensity and texture transformations. The augmented data were then employed as additional test sets for a Pix2Pix model trained on a limited set of 2D RGB images and their corresponding binary ground truth segmentation. This two-step approach was evaluated on unseen images of different greenhouse-grown plants. Our experimental results show that the accuracy of GAN-predicted binary segmentation ranges between 0.88 and 0.95 in terms of the Dice coefficient. Among several loss functions tested, Sigmoid Loss enables the most efficient model convergence during training, achieving the highest average Dice coefficient scores of 0.94 and 0.95 for Arabidopsis and maize images. This underscores the advantages of employing tailored loss functions for the optimization of model performance. Supplementary Information: The online version contains supplementary material available at 10.1186/s13007-025-01441-1.
- Research Article
- 10.1016/j.media.2025.103676
- Oct 1, 2025
- Medical image analysis
- Shengdong Zhang + 6 more
LMS-Net: A learned Mumford-Shah network for binary few-shot medical image segmentation.
- Research Article
- 10.18287/2412-6179-co-1609
- Oct 1, 2025
- Computer Optics
- D.A Ilyukhin + 6 more
In this paper, we present an algorithm for binary segmentation of glioma C6 cells using deep learning methods to simplify and speed up the analysis of this culture's growth. A first-of-its-kind dataset containing 30 microscopic phase-contrast images of glioma C6 cells was collected to design and test the algorithm. We explore the influence of the encoder architecture in the neural network segmenter on the accuracy of glioma cell segmentation. Transfer learning approaches using the LIVECell dataset of microscopic images and the large ImageNet dataset of non-specialized images are used, since the collected dataset contains a relatively small number of images. Experiments show that pre-training the neural network on LIVECell provides a significant advantage in low-resolution glioma cell recognition, with encoders trained on ImageNet providing better results at higher resolution. The paper proposes ways to improve the generalizing ability of the LIVECell weights at high resolution by applying augmentation. We demonstrate that using different starting weights allows us to obtain different generalization properties beyond the training set, which can be useful when detecting, or excluding from consideration, other cells in an image.
- Research Article
- 10.1080/03610918.2025.2565605
- Sep 23, 2025
- Communications in Statistics - Simulation and Computation
- Meenu Rani + 2 more
This article explores the application of the variance gamma distribution for detecting change points in financial data. Unlike conventional two-parameter and three-parameter distributions, the four-parameter variance gamma distribution uniquely captures skewed and heavy-tailed dynamics of daily financial returns. We conduct Monte Carlo simulations under the null hypothesis of no change in distribution parameters to find critical values of the likelihood ratio test and the modified information criterion procedures for change point detection. Simulation studies are also conducted to compare the effectiveness of these procedures. A power analysis is utilized to assess their comparative performance. Since power comparison results indicate the superiority of the modified information criterion over the likelihood ratio test, we work with the modified information criterion to detect change points in real-world datasets. We consider daily log returns of four stock indices: Nikkei 225, Hang Seng Index, OMX Stockholm 30, and NIFTY 50. Multiple change points are detected in these datasets using binary segmentation. These points are further analyzed to provide insights into shifts in financial market dynamics. Our findings highlight that the detected change points align with various macroeconomic shocks related to the 2020 pandemic, interest rates, crude oil prices, inflation rates, political uncertainty, and many others.
- Research Article
- 10.1007/s10278-025-01648-7
- Sep 11, 2025
- Journal of imaging informatics in medicine
- Amir M Vahdani + 4 more
Intraoperative tumor imaging is critical to achieving maximal safe resection during neurosurgery, especially for low-grade glioma resection. Given the convenience of ultrasound as an intraoperative imaging modality, but also the limitations of the ultrasound modality and the time-consuming process of manual tumor segmentation, we propose a learning-based model for the accurate segmentation of low-grade gliomas in ultrasound images. We developed a novel U-Net-based architecture adopting the block architecture of the ConvNeXt V2 model, titled U-ConvNext, which also incorporates various architectural improvements including global response normalization, fine-tuned kernel sizes, and inception layers. We also adopted the CutMix data augmentation technique for semantic segmentation, aiming for enhanced texture detection. Conformal segmentation, a novel approach to conformal prediction for binary semantic segmentation, was also developed for uncertainty quantification, providing calibrated measures of model uncertainty in a visual format. The proposed models were trained and evaluated on three subsets of images in the RESECT dataset and achieved hold-out test Dice scores of 84.63%, 74.52%, and 90.82% on the "before," "during," and "after" subsets, respectively, which indicates increases of ~13-31% compared to the state of the art. Furthermore, external evaluation on the ReMIND dataset indicated robust performance (Dice score of 79.17% [95% CI: 77.82-81.62]) and only a moderate decline of < 3% in expected calibration error. Our approach integrates various innovations in model design, model training, and uncertainty quantification, achieving improved results on the segmentation of low-grade glioma in ultrasound images during neurosurgery.
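The abstract does not detail the conformal segmentation procedure; a generic calibration-set sketch in the same spirit picks the largest probability threshold whose average false-negative rate on held-out images stays below a target level, so the resulting masks carry a calibrated coverage-style guarantee. All names and the grid below are illustrative assumptions:

```python
import numpy as np

def calibrate_threshold(probs: list, masks: list, alpha: float = 0.1) -> float:
    """Pick the largest threshold whose mean per-image false-negative rate
    on the calibration set stays at or below alpha (a simple risk-control
    sketch, not the paper's exact method).

    probs: list of (H, W) foreground probability maps.
    masks: list of (H, W) binary ground-truth masks.
    """
    for t in np.linspace(0.99, 0.0, 100):  # scan from strict to lenient
        fnrs = []
        for p, m in zip(probs, masks):
            fg = m.sum()
            if fg == 0:
                continue  # no foreground: FNR undefined for this image
            missed = np.sum((p < t) & (m == 1))
            fnrs.append(missed / fg)
        if fnrs and np.mean(fnrs) <= alpha:
            return float(t)
    return 0.0
```

Because the false-negative rate only decreases as the threshold drops, the first threshold that satisfies the constraint is the largest valid one.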