LG-NuSegHop: A Local-to-global Self-supervised Pipeline for Nuclei Instance Segmentation

Similar Papers
  • Dissertation
  • 10.17760/d20439211
ARID
  • Aug 24, 2022
  • Rajwinder Singh

Instance segmentation algorithms are used everywhere, whether in self-driving cars, scene mapping by autonomous robots, or the analysis of medical scans. Instance segmentation can be thought of as a further refinement of semantic segmentation: object detection algorithms detect objects in the scene by enclosing them in bounding boxes, semantic segmentation labels those objects, whereas instance segmentation labels each unique instance of those objects. The task is quite complex and becomes even more challenging when the scope is microscopic data. Objects in microscopic data do not usually follow a fixed shape or orientation, so it is very difficult to identify unique instances of these objects using axis-aligned bounding boxes. The alternative approach researchers take is to make pixel-wise predictions and then agglomerate them to obtain the final object instances. In this thesis we present a novel loss function, used to train a U-Net that predicts n-dimensional embedding maps, or ARID (Affinity Representing Instance Descriptors). These embedding vectors contain dense information that can then be used to generate segmentation maps via post-processing. Previous methods have attempted to learn affinities but are prone to errors that result in erroneous segmentation. We show that our segmentation pipeline using the ARID embedding map surpasses the performance of affinity-based networks and solves the problem of merge errors. Our segmentation pipeline has two phases. The first predicts the ARID embedding, for which we trained a U-Net architecture with an ultrametric loss; multiple configurations were tested and compared. The second phase is post-processing, which is further divided into two steps: segmentation generation and refinement. We present a very basic technique that generates a Euclidean minimum spanning tree and prunes the edges whose distance exceeds a provided threshold to produce a segmentation. The other part of the post-processing pipeline is segmentation refinement, for which we propose several refinement approaches. We evaluate performance using IoU scores under Average Precision (AP) thresholds ranging from 0.5 to 0.95 in increments of 0.05. The best average AP0.5 IoU score obtained from affinity-based networks is 0.63; we show that our pipeline generates segmentation maps with a best average AP0.5 IoU score of 0.826, surpassing the affinity-based networks. We also show the failure modes of our proposed loss function and present future directions for research in the field. Embedding-based approaches show promise for efficient instance segmentation, especially in complex scenes such as microscopic data. The generalized loss function presented in this thesis is capable of this task and offers a better alternative to affinity-based segmentation methods. --Author's abstract
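The post-processing step described above (build a Euclidean minimum spanning tree over the embedding vectors, prune edges above a distance threshold, and take the surviving components as instances) can be sketched in a few lines. This is a minimal illustration using SciPy, not the thesis code; the function name and the dense distance matrix (fine for small crops, too costly for full images) are assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_segment(embeddings, threshold):
    """Cluster embedding vectors: build a Euclidean minimum spanning
    tree, prune edges longer than `threshold`, and read off the
    connected components as instance labels."""
    dist = squareform(pdist(embeddings))         # dense pairwise distances
    mst = minimum_spanning_tree(dist).toarray()  # MST edge weights, 0 = no edge
    mst[mst > threshold] = 0                     # prune long edges
    _, labels = connected_components(mst != 0, directed=False)
    return labels
```

On a toy set of four 2-D embeddings forming two tight pairs, a threshold of 1.0 yields two instances.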

  • Research Article
  • Cited by 1
  • 10.2478/cait-2025-0022
Unification of Semantic and Instance Segmentation with BoundaryX
  • Sep 1, 2025
  • Cybernetics and Information Technologies
  • Teodor Boyadzhiev + 1 more

Semantic segmentation is a field of image content recognition in which each pixel is classified according to the type of object it belongs to, while instance segmentation distinguishes individual object instances. A novel method, BoundaryX, is proposed to unify both tasks without relying on bounding boxes. Each pixel is classified, and boundaries are drawn around separate instances, enabling easy bounding box calculation without shape constraints or region proposals. Both instanced objects (like people) and non-instanced ones (like the sky) are handled by BoundaryX, without hardcoded exceptions. The quality of the method was evaluated on the COCO dataset for the class “people” by measuring Intersection over Union (IoU) for the semantic segmentation, and recall and precision for the bounding boxes. The method achieved 0.774 IoU for semantic segmentation, and 75% recall and 83% precision for bounding box quality. Segmentation pipelines are simplified through the unified solution and the flexible boundary-based representation provided by BoundaryX.
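The claim that bounding boxes fall out of the per-instance boundaries for free can be illustrated generically: once pixels carry instance labels, each box is just the min/max extent of its label. A minimal sketch assuming a 2D integer label map with 0 as background; this is not the paper's code, and the function name is illustrative.

```python
import numpy as np

def boxes_from_labels(label_map):
    """Derive an axis-aligned bounding box for each instance in a
    2D integer label map (0 = background).
    Returns {label: (ymin, xmin, ymax, xmax)}, inclusive coordinates."""
    boxes = {}
    for lab in np.unique(label_map):
        if lab == 0:
            continue                              # skip background
        ys, xs = np.nonzero(label_map == lab)     # pixel coordinates of this instance
        boxes[int(lab)] = (int(ys.min()), int(xs.min()),
                           int(ys.max()), int(xs.max()))
    return boxes
```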

  • Conference Article
  • Cited by 32
  • 10.1109/wacv48630.2021.00404
Style Consistent Image Generation for Nuclei Instance Segmentation
  • Jan 1, 2021
  • Xuan Gong + 3 more

In medical image analysis, one limitation of the application of machine learning is the insufficient amount of data with detailed annotation, due primarily to high cost. Another impediment is the domain gap observed between images from different organs and different collections. The differences are even more challenging for the nuclei instance segmentation, where images have significant nuclei stain distribution variations and complex pleomorphisms (sizes and shapes). In this work, we generate style consistent histopathology images for nuclei instance segmentation. We set up a novel instance segmentation framework that integrates a generator and discriminator into the segmentation pipeline with adversarial training to generalize nuclei instances and texture patterns. A segmentation net detects and segments both real nuclei and synthetic nuclei and provides feedback so that the generator can synthesize images that can boost the segmentation performance. Experimental results on three public nuclei datasets indicate that our proposed method outperforms previous nuclei segmentation methods.

  • Book Chapter
  • Cited by 1
  • 10.1007/978-3-031-17024-9_7
Towards Improving Bio-Image Segmentation Quality Through Ensemble Post-processing of Deep Learning and Classical 3D Segmentation Pipelines
  • Oct 20, 2022
  • Anuradha Kar

In biological image analysis, 3D instance segmentation is a crucial step towards extracting information on objects of interest from microscopy datasets. Existing instance segmentation pipelines are frequently affected by errors such as missing boundary-layer cells or poorly segmented regions. In this study, we propose several ensembles as post-processing methods for improving the quality of outputs obtained from deep learning and classical 3D segmentation pipelines. These methods take as input the results from two independent 3D segmentation pipelines and combine them using different fusion algorithms. The first algorithm uses label set intersection, the second involves adjacency graph composition, and the third works through segmented object boundary fusion followed by 3D watershed. These three algorithms are tested on a dataset of 3D confocal microscopy images of floral tissues. The third fusion algorithm performs best, with better global and local accuracies than its input segmentations. The specialty of the proposed ensemble methods is that they are model-agnostic, i.e., they can combine segmentation results from deep learning as well as non-deep-learning or classical pipelines. These methods could be highly beneficial in correcting segmentation errors arising from missing cells in the boundary layer or under-segmentation in the inner tissue layers, ultimately providing robust segmentation results in the presence of variable image quality in biological datasets.
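The first fusion strategy (label set intersection) can be sketched generically: every distinct pair of overlapping labels from the two input segmentations becomes one fused instance. A minimal 2D illustration assuming integer label maps with 0 as background; the chapter's actual 3D implementation may differ, and the function name is an assumption.

```python
import numpy as np

def intersection_fusion(seg_a, seg_b):
    """Fuse two instance segmentations: each unique pair of overlapping
    labels becomes one fused instance; pixels that either input calls
    background (0) stay background."""
    pairs = np.stack([seg_a.ravel(), seg_b.ravel()], axis=1)
    fused = np.zeros(seg_a.size, dtype=np.int64)
    fg = (pairs[:, 0] != 0) & (pairs[:, 1] != 0)   # foreground in both inputs
    # each distinct (label_a, label_b) pair -> a fresh fused label
    _, inv = np.unique(pairs[fg], axis=0, return_inverse=True)
    fused[fg] = inv.ravel() + 1
    return fused.reshape(seg_a.shape)
```

Where one pipeline merges two cells that the other separates, the pair-wise relabeling splits them again, which is why intersection acts as a conservative correction.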

  • Research Article
  • Cited by 18
  • 10.1242/dev.202817
Nuclear instance segmentation and tracking for preimplantation mouse embryos
  • Nov 1, 2024
  • Development (Cambridge, England)
  • Hayden Nunley + 16 more

For investigations into fate specification and morphogenesis in time-lapse images of preimplantation embryos, automated 3D instance segmentation and tracking of nuclei are invaluable. Low signal-to-noise ratio, high voxel anisotropy, high nuclear density, and variable nuclear shapes can limit the performance of segmentation methods, while tracking is complicated by cell divisions, low frame rates, and sample movements. Supervised machine learning approaches can radically improve segmentation accuracy and enable easier tracking, but they often require large amounts of annotated 3D data. Here, we first report a new mouse line expressing the near-infrared nuclear reporter H2B-miRFP720. We then generate a dataset (termed BlastoSPIM) of 3D images of H2B-miRFP720-expressing embryos with ground truth for nuclear instances. Using BlastoSPIM, we benchmark seven convolutional neural networks and identify Stardist-3D as the most accurate instance segmentation method. With our BlastoSPIM-trained Stardist-3D models, we construct a complete pipeline for nuclear instance segmentation and lineage tracking from the eight-cell stage to the end of preimplantation development (>100 nuclei). Finally, we demonstrate the usefulness of BlastoSPIM as pre-training data for related problems, both for a different imaging modality and for different model systems.
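Benchmarks like the one above rank methods by how many predicted instances match ground-truth instances at a given IoU threshold. A generic greedy-matching sketch (not the paper's evaluation code; names and the greedy strategy are illustrative assumptions):

```python
import numpy as np

def match_instances(gt, pred, iou_thresh=0.5):
    """Greedily match ground-truth to predicted instances in two
    integer label maps (0 = background); a pair matches when its
    IoU reaches `iou_thresh`. Returns the number of true positives."""
    matched = 0
    used = set()                                   # predictions already claimed
    for g in np.unique(gt):
        if g == 0:
            continue
        gmask = gt == g
        best, best_iou = None, iou_thresh
        for p in np.unique(pred[gmask]):           # only predictions overlapping g
            if p == 0 or p in used:
                continue
            pmask = pred == p
            iou = (np.logical_and(gmask, pmask).sum()
                   / np.logical_or(gmask, pmask).sum())
            if iou >= best_iou:
                best, best_iou = p, iou
        if best is not None:
            used.add(best)
            matched += 1
    return matched
```

From the true-positive count, precision, recall, and AP at each threshold follow directly.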

  • Conference Article
  • Cited by 2
  • 10.1109/bibm52615.2021.9669432
Automated Nanoparticle Count via Modified BlendMask Instance Segmentation on SEM Images
  • Dec 9, 2021
  • Linpeng Lv + 6 more

In high-throughput drug research, statistical analysis of nanoparticles has been one of the focuses in drug carrier systems. This can be accomplished via electron microscopy imaging and image analysis such as image segmentation. For example, in some cases selecting and counting the nanoparticles in the field of view are important for drug screening. In order to minimize manual interaction and avoid extensive workloads, we present a pipeline featuring deep learning-based instance segmentation, with experiments implemented on both real and synthetic data. The proposed instance segmentation approach, namely Modified BlendMask, aims to improve the accuracy of nanoparticle detection and further refine the automated nanoparticle count. In this framework, we address the problem of missed detections, i.e., false negatives, introduced by overlap and blur, sparse particle distribution, and tiny particles in images. Experiments demonstrate the effectiveness of the proposed pipeline and instance segmentation method for automated nanoparticle counting, with an overall count accuracy of 70.2% for 38,670 particles on the 1,141 test images.

  • Research Article
  • Cited by 10
  • 10.1016/j.ecoinf.2022.101794
Instance segmentation and tracking of animals in wildlife videos: SWIFT - segmentation with filtering of tracklets
  • Sep 1, 2022
  • Ecological Informatics
  • Frank Schindler + 1 more


  • Research Article
  • Cited by 21
  • 10.1186/s12859-021-04037-3
InstantDL: an easy-to-use deep learning pipeline for image segmentation and classification
  • Mar 2, 2021
  • BMC Bioinformatics
  • Dominik Jens Elias Waibel + 2 more

Background: Deep learning contributes to uncovering molecular and cellular processes with highly performant algorithms. Convolutional neural networks have become the state-of-the-art tool to provide accurate and fast image data processing. However, published algorithms mostly solve only one specific problem and they typically require a considerable coding effort and machine learning background for their application. Results: We have thus developed InstantDL, a deep learning pipeline for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression and classification. InstantDL enables researchers with a basic computational background to apply debugged and benchmarked state-of-the-art deep learning algorithms to their own data with minimal effort. To make the pipeline robust, we have automated and standardized workflows and extensively tested it in different scenarios. Moreover, it allows assessing the uncertainty of predictions. We have benchmarked InstantDL on seven publicly available datasets achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible and well documented. Conclusions: With InstantDL, we hope to empower biomedical researchers to conduct reproducible image processing with a convenient and easy-to-use pipeline.

  • Conference Article
  • Cited by 365
  • 10.1109/cvpr42600.2020.00856
Deep Snake for Real-Time Instance Segmentation
  • Jun 1, 2020
  • Sida Peng + 5 more

This paper introduces a novel contour-based approach named deep snake for real-time instance segmentation. Unlike some recent methods that directly regress the coordinates of object boundary points from an image, deep snake uses a neural network to iteratively deform an initial contour to match the object boundary, implementing the classic idea of snake algorithms with a learning-based approach. For structured feature learning on the contour, we propose to use circular convolution in deep snake, which better exploits the cycle-graph structure of a contour compared with generic graph convolution. Based on deep snake, we develop a two-stage pipeline for instance segmentation: initial contour proposal and contour deformation, which can handle errors in object localization. Experiments show that the proposed approach achieves competitive performance on the Cityscapes, KINS, SBD and COCO datasets while being efficient for real-time applications, running at 32.3 fps for 512 x 512 images on a 1080Ti GPU. The code is available at https://github.com/zju3dv/snake/.
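The circular convolution deep snake proposes can be illustrated directly: because the contour is a cycle, the convolution window wraps around the vertex sequence instead of being zero-padded at the ends. A NumPy sketch under assumed naming, not the authors' implementation:

```python
import numpy as np

def circular_conv1d(features, kernel):
    """Circular convolution over per-vertex contour features: the
    contour is a cycle, so the window wraps around instead of being
    zero-padded at the ends.
    features: (N, C) array; kernel: (K,) weights with K odd."""
    kernel = np.asarray(kernel, dtype=float)
    k = len(kernel) // 2
    # wrap-pad along the vertex axis so the two ends see each other
    padded = np.concatenate([features[-k:], features, features[:k]], axis=0)
    out = np.zeros(features.shape, dtype=float)
    for i in range(len(features)):
        window = padded[i:i + len(kernel)]          # (K, C) cyclic neighbourhood
        out[i] = (window * kernel[:, None]).sum(axis=0)
    return out
```

With a box kernel of ones, each output vertex becomes the sum of itself and its two cyclic neighbours, so vertex 0 sees vertex N-1 and vice versa.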

  • Research Article
  • Cited by 1
  • 10.1158/1538-7445.am2022-464
Abstract 464: AI-powered segmentation and analysis of nuclei morphology predicts genomic and clinical markers in multiple cancer types
  • Jun 15, 2022
  • Cancer Research
  • John Abel + 15 more

Morphological features of cancer cell nuclei are linked to gene expression signatures and genomic alterations. In addition, pathologists have leveraged nuclear morphology as diagnostic and prognostic markers. To enable the use of nuclear morphology in digital pathology, we developed a pan-tissue, deep-learning-based digital pathology pipeline for exhaustive nucleus detection, instance segmentation, and classification. We collected > 29,000 manual nucleus annotations from hematoxylin and eosin (H&E)-stained pathology images from 21 tumor types at 40x and 20x magnification from The Cancer Genome Atlas (TCGA) project, as well as a proprietary set of H&E-stained tissue biopsies of skin, liver non-alcoholic steatohepatitis (NASH), colon inflammatory bowel disease (IBD), and kidney lupus. Annotations were used to train an object detection and segmentation model for identifying nuclei. Application of the model to held-out test data, including held-out tissue types, demonstrated performance comparable to state-of-the-art models described in the literature (mean Dice score = 0.80, aggregated Jaccard index = 0.60). We deployed our model to segment nuclei in H&E slides from the breast cancer (BRCA, N = 941) and prostate adenocarcinoma (PRAD, N = 457) TCGA cohorts. We extracted interpretable features describing the shape (circularity, eccentricity), size, staining intensity (mean and standard deviation), and texture of each nucleus. Nuclei were assigned as cancer or other cell types using separately trained convolutional neural networks for BRCA and PRAD. We used the mean and standard deviation of each feature sampled from a random subset of cancer nuclei to summarize the nuclear morphology on each slide (mean (range) = 10,068 (5,981-10,452) cancer cells from each BRCA slide; mean (range) = 10,053 (5,029-10,495) cancer cells from each PRAD slide). 
We used nuclear features to construct random forest classification models for predicting markers of genomic instability and prognosis: whole-genome doubling (WGD) and homologous recombination deficiency (HRD) status separately in BRCA and PRAD, HER2 subtype in BRCA, and Gleason grade in PRAD. Nuclear features were predictive of WGD (area under the receiver operating characteristic curve (AUROC) = 0.78 BRCA, = 0.69 PRAD) and binarized HRD status (AUROC = 0.65 BRCA, = 0.68 PRAD) on held-out test sets. Nuclear features were predictive of HER2-enriched breast cancer vs. other molecular subtypes (AUROC = 0.72), and distinguished between low risk (6) and moderate/high risk (7-10) Gleason grade in PRAD (AUROC = 0.72). In summary, we present a powerful pan-tissue approach for nucleus segmentation and featurization, which enables the construction of predictive models and the identification of features linking nuclear morphology with clinically-relevant prognostic biomarkers across multiple cancer types. Citation Format: John Abel, Suyog Jain, Deepta Rajan, Ken Leidal, Harshith Padigela, Aaditya Prakash, Jake Conway, Michael Nercessian, Christian Kirkup, Robert Egger, Ben Trotter, Andrew Beck, Ilan Wapinski, Michael G. Drage, Limin Yu, Amaro Taylor-Weiner. AI-powered segmentation and analysis of nuclei morphology predicts genomic and clinical markers in multiple cancer types [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 464.
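Two of the interpretable nuclear features mentioned above, circularity and eccentricity, have standard definitions that can be restated directly; the helpers below are illustrative, not the authors' pipeline. Circularity is 4πA/P², equal to 1.0 for a perfect circle; eccentricity here is derived from the second moments of a binary mask.

```python
import numpy as np

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, smaller for elongated
    or irregular nuclei."""
    return 4.0 * np.pi * area / perimeter ** 2

def eccentricity(mask):
    """Eccentricity of a binary mask from its second moments:
    0 for a rotationally symmetric blob, approaching 1 as it elongates."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([ys, xs]))        # 2x2 covariance of pixel coordinates
    minor, major = np.sort(np.linalg.eigvalsh(cov))
    return np.sqrt(1.0 - minor / major)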

  • Research Article
  • Cited by 29
  • 10.3389/fpls.2023.1109314
DFSP: A fast and automatic distance field-based stem-leaf segmentation pipeline for point cloud of maize shoot.
  • Jan 31, 2023
  • Frontiers in Plant Science
  • Dabao Wang + 8 more

3D point cloud data are used to analyze plant morphological structure. Organ segmentation of a single plant directly determines the accuracy and reliability of organ-level phenotypic estimation in a point-cloud study. However, it is difficult to achieve high-precision, automatic, and fast plant point cloud segmentation, and few methods can integrate the global structural features and local morphological features of point clouds at a reasonable cost. In this paper, a distance field-based segmentation pipeline (DFSP), which codes the global spatial structure and local connectivity of a plant, was developed to realize rapid organ location and segmentation. The terminal point clouds of different plant organs were first extracted via DFSP during stem-leaf segmentation, followed by identification of the low-end point cloud of the maize stem based on local geometric features. Region growing was then combined to obtain a stem point cloud. Finally, instance segmentation of the leaf point cloud was realized using DFSP. The segmentation method was tested on 420 maize plants and compared with manually obtained ground truth. Notably, DFSP had an average processing time of 1.52 s for about 15,000 points of maize plant data. The mean precision, recall, and micro F1 score of the DFSP segmentation algorithm were 0.905, 0.899, and 0.902, respectively. These findings suggest that DFSP can accurately, rapidly, and automatically achieve maize stem-leaf segmentation and could be effective in maize phenotype research. The source code can be found at https://github.com/syau-miao/DFSP.git.
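The core idea behind a distance field over a plant point cloud, geodesic distance from a base point along a neighbourhood graph so that organ tips appear as local maxima, can be sketched with SciPy. Everything here (function name, k-nearest-neighbour graph, single source point) is an illustrative assumption, not the DFSP source.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def distance_field(points, source_idx=0, k=6):
    """Geodesic distances from one source point over a k-nearest-
    neighbour graph of the point cloud; along a plant, organ tips
    show up as local maxima of this field."""
    tree = cKDTree(points)
    dists, idxs = tree.query(points, k=k + 1)   # neighbour [0] is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    cols = idxs[:, 1:].ravel()
    vals = dists[:, 1:].ravel()
    graph = csr_matrix((vals, (rows, cols)), shape=(len(points), len(points)))
    return dijkstra(graph, directed=False, indices=source_idx)
```

On points spaced evenly along a line, the field simply grows with arc length from the source, as expected.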

  • Research Article
  • Cited by 12
  • 10.34133/plantphenomics.0179
AppleQSM: Geometry-Based 3D Characterization of Apple Tree Architecture in Orchards.
  • Jan 1, 2024
  • Plant Phenomics
  • Tian Qiu + 8 more

The architecture of apple trees plays a pivotal role in shaping their growth and fruit-bearing potential, forming the foundation for precision apple management. Traditionally, 2D imaging technologies were employed to delineate the architectural traits of apple trees, but their accuracy was hampered by occlusion and perspective ambiguities. This study aimed to surmount these constraints by devising a 3D geometry-based processing pipeline for apple tree structure segmentation and architectural trait characterization, utilizing point clouds collected by a terrestrial laser scanner (TLS). The pipeline consisted of four modules: (a) data preprocessing module, (b) tree instance segmentation module, (c) tree structure segmentation module, and (d) architectural trait extraction module. The developed pipeline was used to analyze 84 trees of two representative apple cultivars, characterizing architectural traits such as tree height, trunk diameter, branch count, branch diameter, and branch angle. Experimental results indicated that the established pipeline attained an R2 of 0.92 and 0.83, and a mean absolute error (MAE) of 6.1 cm and 4.71 mm for tree height and trunk diameter at the tree level, respectively. Additionally, at the branch level, it achieved an R2 of 0.77 and 0.69, and a MAE of 6.86 mm and 7.48° for branch diameter and angle, respectively. The accurate measurement of these architectural traits can enable precision management in high-density apple orchards and bolster phenotyping endeavors in breeding programs. Moreover, bottlenecks of 3D tree characterization in general were comprehensively analyzed to reveal future development directions.
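The accuracy figures above (R², MAE) are standard regression metrics; restating them is useful for readers reproducing such trait evaluations. The helper names below are assumptions, not part of the AppleQSM pipeline.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the measured trait."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))
```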

  • Conference Article
  • 10.5121/csit.2025.151921
Automated Morphological Analysis of Neurons in Fluorescence Microscopy Using YOLOv8
  • Oct 18, 2025
  • Banan Alnemri + 1 more

Accurate segmentation and precise morphological analysis of neuronal cells in fluorescence microscopy images are crucial steps in neuroscience and biomedical imaging applications. However, this process is labor-intensive and time-consuming, requiring significant manual effort and expertise to ensure reliable outcomes. This work presents a pipeline for neuron instance segmentation and measurement based on a high-resolution dataset of stem-cell-derived neurons. The proposed method uses YOLOv8, trained on manually annotated microscopy images. The model achieved high segmentation accuracy, exceeding 97%. In addition, the pipeline utilized both ground truth and predicted masks to extract biologically significant features, including cell length, width, area, and grayscale intensity values. The overall accuracy of the extracted morphological measurements reached 75.32%, further supporting the effectiveness of the proposed approach. This integrated framework offers a valuable tool for automated analysis in cell imaging and neuroscience research, reducing the need for manual annotation and enabling scalable, precise quantification of neuron morphology.

  • Research Article
  • Cited by 21
  • 10.3390/s23073576
Mushroom Detection and Three Dimensional Pose Estimation from Multi-View Point Clouds
  • Mar 29, 2023
  • Sensors (Basel, Switzerland)
  • George Retsinas + 3 more

Agricultural robotics is an up-and-coming field that deals with the development of robotic systems able to tackle a multitude of agricultural tasks efficiently. The case of interest in this work is mushroom collection in industrial mushroom farms. Developing such a robot, able to select and uproot a mushroom, requires delicate actions that can only be conducted if a well-performing perception module exists. Specifically, one should accurately detect the 3D pose of a mushroom in order to facilitate the smooth operation of the robotic system. In this work, we develop a vision module for 3D pose estimation of mushrooms from multi-view point clouds using multiple RealSense active-stereo cameras. The main challenge is the lack of annotation data, since 3D annotation is practically infeasible on a large scale. To address this, we developed a novel pipeline for mushroom instance segmentation and template matching, where a 3D model of a mushroom is the only data available. We evaluated our approach quantitatively on a synthetic dataset of mushroom scenes and further validated its effectiveness qualitatively on a set of real data collected with different vision settings.

  • Research Article
  • Cited by 37
  • 10.1016/j.ascom.2020.100420
Mask galaxy: Morphological segmentation of galaxies
  • Aug 28, 2020
  • Astronomy and Computing
  • H Farias + 4 more

