Abstract

Introduction/Objective

Medical errors, liability, and reduced productivity in hematopathology laboratories stem from fatigue, stress, and staffing shortages. Between 5% and 35% of complete blood count specimens (of at least 3 billion tests performed worldwide annually), and 100% of bone marrow aspirates, require visual examination using microscopes or digital scanners. Deep learning technologies and the growing digital-scanner market make it feasible to develop workflow-ready automated visual examination to assist the hematopathologist.

Methods/Case Report

More than 2,600 manually smeared and stained bone marrow aspirate slides were scanned using cameras integrated on microscopes with electronic/mechanical control units. On top of the intracellular and background staining/dyeing variations, new scanning-related variations in exposure, resolution/focus, and stitching were noted both between and within patches. These led to loss of consensus and "carry over" during cell identification. A convolutional autoencoder with skip connections was developed to restore/normalize stain and resolution, trained on data simulating the full range of variations and artifacts. Dimensions and ground truths were generated simply by drawing on cell borders with pens, which enabled localizing and segmenting more than 5,000 cells per day with near-100% accuracy. To assist labelling, intracellular color splitting and quantification were applied. The resulting 1.72+ million processed cells across 14 WBC classes/types were used to train a U-Net architecture for segmentation, an EfficientNetB0 network pre-trained on ImageNet (transfer learning) for classification, and YOLO for detection. A previously developed RBC-counting pipeline was used to select the proper areas for counting on smears. New, diverse datasets were used for testing, and results were validated against confirmatory tests including immunohistochemistry and flow cytometry.
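The degradation-simulation step described above can be sketched as follows. This is a minimal NumPy illustration, not the study's implementation: the function names, parameter ranges, and the box-blur stand-in for defocus are all assumptions chosen to show how (degraded, clean) training pairs for a restoration autoencoder might be generated.

```python
import numpy as np

def simulate_scan_variations(patch, rng):
    """Degrade a clean RGB patch (H, W, 3, floats in [0, 1]) with the kinds
    of variation noted on the scanned slides: exposure shifts, per-channel
    stain drift, and loss of focus (approximated here by a 3x3 box blur).
    Hypothetical ranges, illustrative only."""
    degraded = patch.copy()
    # Exposure variation: global brightness scaling.
    degraded *= rng.uniform(0.7, 1.3)
    # Stain variation: independent per-channel scaling.
    degraded *= rng.uniform(0.85, 1.15, size=3)
    # Focus variation: crude defocus via a 3x3 box blur.
    k = 3
    pad = np.pad(degraded, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = np.zeros_like(degraded)
    for dy in range(k):
        for dx in range(k):
            blurred += pad[dy:dy + degraded.shape[0],
                           dx:dx + degraded.shape[1]]
    degraded = blurred / (k * k)
    return np.clip(degraded, 0.0, 1.0)

def make_training_pair(patch, rng):
    """(input, target) pair: the autoencoder learns degraded -> clean."""
    return simulate_scan_variations(patch, rng), patch
```

Pairs produced this way would serve as input/target examples for the skip-connection autoencoder, so that at inference time it maps a real, variably scanned patch back toward a normalized appearance.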
Results

The five-by-two cross-validations revealed only minor performance fluctuations (±1.05%), with averages for normalization SSIM, detection mAP, segmentation IoU, and classification F1-score exceeding 0.96, 0.97, 0.98, and 0.97, respectively. This beta prototype finds the proper areas for counting, calculates the M:E ratio, and collects and counts the differential. Sufficient assessment of qualitative abnormalities still needs more work.

Conclusion

Without normalization and at least 100,000 cell images per class, there is no generalizability.
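The differential-count and M:E-ratio step can be illustrated with a short sketch. The lineage map below is hypothetical (the real pipeline uses 14 WBC classes, only a few of which are named here), and the function is an assumption about how per-cell classifier labels might be aggregated, not the authors' code.

```python
from collections import Counter

# Hypothetical lineage map: which classifier labels count as myeloid
# vs. erythroid precursors for the M:E ratio (subset shown only).
MYELOID = {"myeloblast", "promyelocyte", "myelocyte", "metamyelocyte",
           "band", "segmented_neutrophil"}
ERYTHROID = {"pronormoblast", "basophilic_normoblast",
             "polychromatic_normoblast", "orthochromatic_normoblast"}

def differential_and_me_ratio(predicted_labels):
    """Collect the differential count from per-cell classifier output
    and derive the myeloid:erythroid (M:E) ratio from it."""
    differential = Counter(predicted_labels)
    myeloid = sum(n for c, n in differential.items() if c in MYELOID)
    erythroid = sum(n for c, n in differential.items() if c in ERYTHROID)
    ratio = myeloid / erythroid if erythroid else float("inf")
    return differential, ratio
```

For example, 60 myelocytes, 30 bands, and 30 basophilic normoblasts would give an M:E ratio of (60 + 30) / 30 = 3.0.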
