Abstract

Background: In corneal lacerations, the lack of high-order image features that can serve as biomarkers to guide surgical strategy is a limiting factor, and the lack of multimodal data restricts the development of automated reconstruction designs for corneal lacerations. This study aims to train and optimize a model based on high-order features from corneal laceration images and real suture samples, and to provide intelligent decision support for the entire corneal laceration suturing process through a two-step method of automatic wound identification and stitch position prediction.

Methods: Based on images of isolated corneal wound samples, wounds were identified with a fully supervised U-Net and with a consistency-regularized semisupervised method based on the mean-teacher model. The Dice coefficient was used to evaluate segmentation and recognition performance. Traditional image processing techniques were then used to predict the needle entry and exit points of wound sutures according to medical suturing principles, and the prediction was evaluated by key point similarity.

Results: After training the wound recognition model on 2400 corneal images with corresponding incision labels, the Dice coefficients of the supervised U-Net with and without postprocessing were 0.902 and 0.817, respectively. The Dice coefficients of the semisupervised mean-teacher model with and without postprocessing were 0.921 and 0.843, respectively. The key point similarity of wound stitch position prediction was 0.872 ± 0.021.

Conclusion: This new automated method for corneal laceration identification and stitch position generation, based on novel biomarkers and multimodal data, is expected to help doctors treating corneal lacerations quickly formulate a primary suturing strategy.
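As a rough illustration of the quantities named above, the sketch below computes a Dice overlap between a predicted and a reference wound mask and a mean-teacher-style consistency term. It is a minimal sketch, not the paper's implementation: the function names (dice_coefficient, consistency_loss, update_teacher), the mean-squared-error form of the consistency penalty, and the EMA decay value are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def consistency_loss(student_probs, teacher_probs):
    """Illustrative consistency penalty (assumed MSE) between the student's
    and the teacher's probability maps on unlabeled images."""
    return np.mean((student_probs - teacher_probs) ** 2)

def update_teacher(teacher_weights, student_weights, ema_decay=0.99):
    """Mean-teacher update: teacher parameters track an exponential moving
    average of the student parameters."""
    return [ema_decay * t + (1.0 - ema_decay) * s
            for t, s in zip(teacher_weights, student_weights)]
```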
