Abstract

This paper investigates whether deep learning architectures for semantic segmentation can support geneticists in exporting karyotypes more efficiently and without human intervention. For the experiments, 62 images from the BioImLab segmentation dataset were adopted; they contain chromosomes, nuclei, and some unknown objects. All regions of interest were annotated manually, with an emphasis on the overlapping areas between chromosomes. For this purpose, we created 10 synthetic folds using holdout cross-validation over 10 selected microscope images that contain all classes. The newly designed dataset is used to train five deep convolutional neural networks with pretrained weights via transfer learning, in order to highlight the strengths and weaknesses of each architecture in segmenting the “Overlapping” regions. For evaluation, the intersection over union (IoU) metric is used, as it is widely adopted when objects overlap. The best result was 66.67% IoU, obtained by the VGG19 model combined with U-Net, which also achieved a mean IoU of 57.1%. The future prospects of this study are to assist cytogeneticists in (a) removing objects of no interest from the microscope image, (b) evaluating the suitability of microscope images for karyotyping, and (c) automating the karyotyping process.
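The reported scores follow the standard IoU definition, IoU = |A ∩ B| / |A ∪ B|, computed per class and then averaged to obtain the mean IoU. The sketch below is illustrative only and is not the authors' implementation: it assumes integer-labelled masks, a hypothetical four-class layout (background, chromosome, nucleus/unknown, overlapping), and uses NumPy to compute per-class IoU and mean IoU.

```python
# Minimal sketch (assumption: not the paper's code) of per-class IoU and mean IoU
# for multi-class segmentation masks, as used to score the "Overlapping" class.
import numpy as np

def per_class_iou(y_true, y_pred, num_classes):
    """Compute IoU for each class from integer-labelled masks.

    y_true, y_pred: arrays of shape (H, W) with class indices 0..num_classes-1.
    Returns a list of IoU values (np.nan when a class is absent from both masks).
    """
    ious = []
    for c in range(num_classes):
        true_c = (y_true == c)
        pred_c = (y_pred == c)
        intersection = np.logical_and(true_c, pred_c).sum()
        union = np.logical_or(true_c, pred_c).sum()
        ious.append(intersection / union if union > 0 else np.nan)
    return ious

# Hypothetical usage with random masks; a real evaluation would use the
# manually annotated ground truth and the network's predicted masks.
y_true = np.random.randint(0, 4, size=(128, 128))
y_pred = np.random.randint(0, 4, size=(128, 128))
ious = per_class_iou(y_true, y_pred, num_classes=4)
print("per-class IoU:", ious)
print("mean IoU:", np.nanmean(ious))
```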
