Abstract

Study question
Can an Artificial Intelligence (AI) system based on a deep learning algorithm analyze time-lapse videos to predict ploidy status?

Summary answer
Our spatiotemporal model can distinguish aneuploid from euploid embryos using time-lapse videos spanning 10 to 115 hours post-insemination (hpi) with an accuracy of 71.28%.

What is known already
As maternal age advances, the chance of aneuploidy in the oocyte increases, and with it the risk of early pregnancy loss. Pre-implantation genetic testing for aneuploidy (PGT-A) is a reliable tool for determining chromosomal status. However, PGT-A is an invasive technique whose protocol requires an embryo biopsy. Continuous monitoring of embryo development has led to AI models that predict ploidy from blastocyst images or morphokinetic parameters. Previous publications showed that euploid embryos reach blastulation earlier than non-euploid embryos. This is the first attempt to predict ploidy by analyzing continuous embryo development through captured time-lapse images.

Study design, size, duration
The present study was a single-center retrospective analysis for the evaluation of ploidy status with a non-invasive method. We developed our models on a balanced dataset of 940 videos (from 10 to 115 hpi) extracted from the EmbryoScope time-lapse system. The videos were split into 90% for training and validation and 10% for testing. The target class for the prediction models was the result of PGT-A performed on the blastocyst by next-generation sequencing.

Participants/materials, setting, methods
We used an end-to-end approach to develop an automated AI system capable of extracting features from images and classifying them while accounting for temporal dependencies. First, a convolutional neural network (CNN) with a deep architecture known as ResNet50 extracted the most relevant features from each frame. Second, a bidirectional long short-term memory (LSTM) layer received this information and analyzed temporal dependencies, producing a low-dimensional feature vector that characterized each video. Finally, a multilayer perceptron classified these vectors.

Main results and the role of chance
Euploid and aneuploid precision was 69% and 75%, respectively. Euploid and aneuploid sensitivity was 79% and 64%, respectively. Euploid and aneuploid F1 scores were 73% and 69%, respectively. The overall accuracy of our spatiotemporal model in differentiating the two classes was 71.28% on this dataset. Additionally, we trained models with external information such as maternal age (38.3±3.9 versus 39.1±3.1), but performance did not improve. Note that we did not pre-select good-quality videos, in order to assess more reliably the possible inclusion of an AI model for chromosomal status analysis in clinical practice.

Limitations, reasons for caution
The main limitations of this study are the single-center retrospective design and the limited size of the database; future prospective research would therefore improve model performance. Nevertheless, the preliminary results showed the high potential of the methods.

Wider implications of the findings
Our results showed the potential for automating chromosomal status evaluation. Our findings point toward a possible non-invasive method and toward research into new, as-yet-unknown key factors determining ploidy. Further studies with a larger number of time-lapse videos could lead to translation into clinical use.

Trial registration number
Not applicable.
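To make the described pipeline concrete, a minimal PyTorch sketch of this kind of architecture (per-frame ResNet50 features, a bidirectional LSTM over the frame sequence, and a multilayer-perceptron classifier) is shown below. It is our own illustration under stated assumptions; the PloidyNet name, frame count, hidden sizes, and the use of the final LSTM hidden states as the video descriptor are hypothetical and do not reflect the authors' published implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class PloidyNet(nn.Module):
    """Hypothetical sketch: per-frame ResNet50 features -> bidirectional LSTM -> MLP classifier."""
    def __init__(self, lstm_hidden=256, num_classes=2):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Drop the final fully connected layer; keep the 2048-d pooled features per frame.
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        self.lstm = nn.LSTM(input_size=2048, hidden_size=lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * lstm_hidden, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, videos):
        # videos: (batch, frames, 3, H, W)
        b, t, c, h, w = videos.shape
        frames = videos.view(b * t, c, h, w)
        feats = self.cnn(frames).flatten(1)      # (b*t, 2048) frame features
        feats = feats.view(b, t, -1)             # (b, t, 2048) frame sequence
        _, (h_n, _) = self.lstm(feats)           # h_n: (2, b, lstm_hidden)
        video_vec = torch.cat([h_n[0], h_n[1]], dim=1)  # low-dimensional video descriptor
        return self.classifier(video_vec)        # euploid vs. aneuploid logits

# Example usage with dummy data (2 videos of 8 frames, 224x224 RGB):
model = PloidyNet()
logits = model(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```

In practice the frame sequence from 10 to 115 hpi would be much longer than eight frames; the dummy tensor above is only for shape checking.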
