Abstract

Study question
Can artificial intelligence (AI) effectively assess various regions of embryos to estimate the quality of human embryos captured through an optical microscope?

Summary answer
AI excels at analyzing diverse embryonic regions based on morphology, yielding precise grade predictions when assessing human embryos via optical microscopes.

What is known already
In recent research, the integration of AI and computer vision has marked significant strides in embryo selection, providing precise predictions of clinical pregnancy based on images of human embryos, particularly at Day 5. This study extends those findings by incorporating techniques that are resilient to variations in camera and microscope type. Notably, the AI algorithms underwent rigorous testing on double-blind datasets comprising optical microscopic images sourced from optical microscope systems across diverse regions, including India, Europe, the UK, the US, and Australia. These results signify a promising advancement in the standardization and applicability of AI-assisted embryo assessment methodologies.

Study design, size, duration
Approximately 11,824 static 2D images of Day 5 blastocysts were collected, together with related pregnancy and preimplantation genetic testing for aneuploidy (PGT-A) outcomes, demographic information, and clinic geographical location. Images were divided into three groups: training, validation, and blind test sets. An AI model was trained on 3,473 embryo images, validated on 1,184, and tested on a further blind set of 1,253 images from separate clinics and demographics. The Gardner grading system was used.

Participants/materials, setting, methods
A dataset of optical-microscope images from fertility patients across 25 IVF laboratories in six countries was used to train, validate, and test the AI. The dataset was graded by nine embryologists, and the final grade for each image was decided by max-voting. Participant identities were kept confidential and were not associated with the data used. Assessment encompassed accuracy, grade distribution, and robustness to camera/microscope type. The dataset spanned various embryo development stages, excluding frozen, degenerated, and non-blastocyst embryos.

Main results and the role of chance
The study distinguishes itself by incorporating data from diverse environments, enhancing generalizability. Transformer-based attention models ensure meticulous focus on the regions relevant to the Gardner grading system, producing accurate predictions akin to human judgment while minimizing errors prone to subjectivity. On the blind dataset, the AI's predictive accuracy is 82.18% for expansion, 76.53% for inner cell mass (ICM), and 77.53% for trophectoderm, surpassing previous studies. In estimating overall embryo quality, the model demonstrates a specificity of 79.16%, a sensitivity of 85.95%, and an accuracy of 82.44%. Direct application to EmbryoScope images is hindered by their differing appearance outside the embryo region; however, segmenting the embryo before applying the models yields comparable performance, with an approximately 2.87% drop in accuracy. These findings suggest that the proposed method, with AI trained on a globally diverse dataset, establishes a generalizable model robust to variations in camera type and focal settings.
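The abstract reports transformer-based attention models that predict the three Gardner components but gives no architectural detail. Below is a minimal sketch of one plausible design, assuming a standard Vision Transformer backbone with one classification head per component; the backbone choice, class counts, and all names are illustrative assumptions, not the study's implementation.

```python
# Illustrative sketch only: a multi-head transformer classifier for the
# three Gardner components. Architecture details are assumptions; the
# abstract does not specify the study's model.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class GardnerGrader(nn.Module):
    """One classification head per Gardner component:
    expansion stage 1-6, ICM grade A-C, trophectoderm grade A-C."""
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        backbone = vit_b_16(weights=None)  # pretrained weights could be loaded here
        backbone.heads = nn.Identity()     # expose the 768-d CLS-token embedding
        self.backbone = backbone
        self.expansion_head = nn.Linear(embed_dim, 6)  # stages 1-6
        self.icm_head = nn.Linear(embed_dim, 3)        # grades A/B/C
        self.te_head = nn.Linear(embed_dim, 3)         # grades A/B/C

    def forward(self, x: torch.Tensor):
        z = self.backbone(x)  # (batch, embed_dim)
        return self.expansion_head(z), self.icm_head(z), self.te_head(z)

# One 224x224 RGB microscope frame; each head returns class logits.
logits = GardnerGrader()(torch.randn(1, 3, 224, 224))
```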
Limitations, reasons for caution
Data collected for this solution were limited mainly to static images. The study's focus on good-quality Day 5 blastocyst embryos might limit generalizability to other stages. The dataset's global diversity may not fully capture demographic nuances. Ongoing AI advancements may necessitate continual updates to maintain relevance and accuracy.

Wider implications of the findings
The findings suggest transformative potential for AI in embryo quality assessment. The methodology's success in diverse environments and its robustness to variations in camera type underscore its applicability on a global scale. This could lead to standardized, automated, and precise embryo evaluations, offering significant advancements in assisted reproductive technology.

Trial registration number
Not applicable.
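For illustration, the max-voting consensus described in the methods (nine embryologists grading each image) could be computed as in the sketch below; the tie-breaking rule and the grade strings are assumptions, since the abstract does not specify them.

```python
# Illustrative sketch of max-voting consensus grading: the ground-truth
# grade is the one assigned by the most annotators. Tie handling is an
# assumption; the abstract does not say how ties were resolved.
from collections import Counter

def consensus_grade(grades: list[str]) -> str:
    """Return the most frequent grade among annotators; ties fall back
    to the first-encountered modal grade."""
    return Counter(grades).most_common(1)[0][0]

# Example: nine embryologists grading one blastocyst (hypothetical values).
print(consensus_grade(["4AA", "4AA", "4AB", "4AA", "3AA",
                       "4AA", "4AB", "4AA", "4AA"]))  # -> "4AA"
```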
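Similarly, the reported quality-estimation metrics (specificity 79.16%, sensitivity 85.95%, accuracy 82.44%) follow the standard confusion-matrix definitions; the sketch below assumes a binary good/poor-quality label, which the abstract implies but does not spell out.

```python
# Illustrative sketch of the evaluation metrics, assuming binary labels
# (1 = good quality, 0 = poor quality); names are illustrative.
def binary_metrics(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),        # true-positive rate
        "specificity": tn / (tn + fp),        # true-negative rate
        "accuracy": (tp + tn) / len(y_true),  # overall agreement
    }
```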