Abstract

Study question
Can deep learning (DL) algorithms trained on time-lapse videos be used to detect and track the size and gender of pronuclei in developing human zygotes?

Summary answer
Our DL algorithm not only outperforms state-of-the-art models in detecting pronuclei but can also accurately identify and track their gender and size over time.

What is known already
Recent research has explored the use of DL to extract key morphological features of human embryos. Existing studies, however, focus either on morphological measurements of blastocysts (Au et al. 2020) or on classification of embryos’ general developmental stages (Gingold et al. 2018; Liu et al. 2019; Lau et al. 2019). So far, only one paper has attempted to evaluate the morphological components of zygotes, but it stopped short of identifying the existence and location of their pronuclei (Leahy et al. 2020). We address this research gap by training a DL model that can detect pronuclei, classify their gender, and quantify their size over time.

Study design, size, duration
A retrospective analysis was conducted using 91 fertilized oocytes from infertile patients undergoing IVF or ICSI treatment at Hanabusa Women’s Clinic between January 2011 and August 2019. Each embryo was time-lapse monitored using a Vitrolife system, which records an image every 15 minutes at 7 focal planes. For our study, we used videos of the first 1–2 days of each embryo from its 3 central focal planes, corresponding to 70–150 images per focal plane.

Participants/materials, setting, methods
All 273 time-lapse videos were split into 30,387 grayscale still images at 15-minute intervals. Each image was checked and annotated by experienced embryologists, with every pixel classified into one of 3 categories: male pronucleus, female pronucleus, or other. Images were resized to 500x500 pixels and then fed into a neural network with the Mask R-CNN architecture and a ResNet101 backbone to produce a pronuclei instance segmentation model.

Main results and the role of chance
The 91 embryos were split into a training set (∼70%, or 63 embryos) and a validation set (∼30%, or 28 embryos). Our pronuclei model takes a single image as input and outputs a bounding box, mask, category, confidence score, and size (measured in pixels) for each detected candidate. For prediction, we run the model on the 3 middle focal planes and merge candidates by keeping the one with the highest confidence score. We used the mean average precision (mAP) to evaluate the model’s ability to detect pronuclei, and the mean absolute percentage error (MAPE) between the actual size (as annotated by the embryologists) and the predicted size to assess its performance in tracking pronuclei size. The mAP achieved by our model for detecting pronuclei, regardless of gender, was 0.698, higher than the 0.680 reported by Leahy et al. (2020). Broken down by gender, our model’s mAP for male and female pronuclei was 0.734 and 0.661, respectively. The overall MAPE for tracking pronuclei size was 21.8%; broken down by gender, the MAPE for male and female pronuclei was 19.4% and 24.3%, respectively.

Limitations, reasons for caution
Samples were collected from a single clinic, with videos recorded by a single time-lapse system, which may limit the reproducibility of our results. The accuracy of our DL model is also limited by the small number of embryos used.
Wider implications of the findings
Even with a limited training dataset, our results indicate that the gender and size of zygotes’ pronuclei can be accurately detected and tracked from time-lapse videos. In future work, we will enlarge our training dataset and include other time-lapse systems to improve our models’ accuracy and reproducibility.

Trial registration number
Not applicable
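
The abstract specifies the architecture (Mask R-CNN with a ResNet101 backbone and 500x500 grayscale inputs) but no implementation details. The Python sketch below shows one plausible way such a model could be assembled with torchvision; the library version, class count, and all names are assumptions, not the authors’ code.

```python
# Minimal sketch, assuming torchvision >= 0.13; not the authors' code.
import torch
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# Assumed label set: 0 = background, 1 = male pronucleus, 2 = female pronucleus.
NUM_CLASSES = 3

def build_model() -> MaskRCNN:
    # ResNet101 feature-pyramid backbone feeding a Mask R-CNN head.
    backbone = resnet_fpn_backbone(backbone_name="resnet101", weights=None)
    return MaskRCNN(backbone, num_classes=NUM_CLASSES)

model = build_model().eval()

# One 500x500 grayscale frame, replicated to 3 channels for the backbone.
frame = torch.rand(1, 500, 500)
image = frame.expand(3, -1, -1)

with torch.no_grad():
    # Each output dict holds 'boxes', 'labels', 'scores', and 'masks'.
    prediction = model([image])[0]
```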
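
For prediction, candidates from the 3 central focal planes are merged by keeping the one with the highest confidence score. A minimal sketch of that merge rule follows, assuming standard Mask R-CNN output dictionaries; the score threshold, the per-category grouping, and the mask-area definition of pixel size are illustrative assumptions, since the abstract states only that the highest-scoring candidate is kept.

```python
# Minimal sketch of the cross-focal-plane merge described in the abstract.
# The 0.5 thresholds are assumed values, not reported in the study.

def merge_focal_planes(predictions, score_threshold=0.5):
    """predictions: Mask R-CNN output dicts, one per focal plane."""
    best = {}  # label -> (score, box, mask)
    for pred in predictions:
        for box, label, score, mask in zip(
            pred["boxes"], pred["labels"], pred["scores"], pred["masks"]
        ):
            score = float(score)
            if score < score_threshold:
                continue
            key = int(label)
            # Keep only the highest-confidence candidate per category.
            if key not in best or score > best[key][0]:
                best[key] = (score, box, mask)
    return best

def pronucleus_size_px(mask, mask_threshold=0.5):
    # One plausible reading of "size measured in pixels":
    # the number of mask pixels above threshold.
    return int((mask > mask_threshold).sum())
```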
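
Size tracking is evaluated with the mean absolute percentage error (MAPE) between the embryologist-annotated size and the predicted size. A self-contained sketch of the metric, using illustrative numbers rather than study data:

```python
# MAPE between annotated and predicted pronucleus sizes (in pixels).
# The sizes below are made-up examples, not data from the study.

def mape(actual, predicted):
    assert len(actual) == len(predicted) and actual, "need equal-length, non-empty lists"
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual_sizes = [1200, 980, 1430]
predicted_sizes = [1100, 1050, 1500]
print(f"MAPE: {mape(actual_sizes, predicted_sizes):.1f}%")  # -> MAPE: 6.8%
```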

