Highlights
The presented aortography data visualization system is an effective tool for visually assisting surgeons during transcatheter aortic valve implantation (TAVI) interventions, supporting real-time operation. The proposed data preprocessing algorithm, which improves image quality at minimal computational cost, complements the system, allowing specialists to achieve the best result.

Abstract
Aim. The aim of this study is to develop a visual assistance system for transcatheter aortic valve implantation (TAVI) procedures.
Methods. To address the stated objective, our own dataset of 35 intervention videos was used. The visualization system is based on detecting key points in aortography images, using object detection with YOLO-family artificial neural networks. To achieve the best result, we propose a method that enhances the quality of the input data using convolutional neural networks, specifically an autoencoder architecture.
Results. The study revealed that the convolutional autoencoder model can restore the informativeness of noisy input images from 40% to 75%, thereby increasing the accuracy of object detection in images. The presented real-time tracking system for facilitating TAVI procedures achieves a final accuracy of 51.9% by the mean Average Precision (mAP) quality metric. The visual assistance system can recognize and track key points indicating the location of the aortic root, the delivery system, and the heart valve prosthesis during surgery. The practical significance of the work lies in the fact that the presented aortography data visualization system is an effective tool for visually assisting surgeons during interventions, supporting real-time operation.
Conclusion.
The proposed data preprocessing algorithm, which improves image quality with minimal performance costs, complements the visualization system, allowing specialists to achieve the best results.
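The denoising preprocessor described in Methods could be sketched as a small convolutional autoencoder. This is a minimal illustration only: the layer counts, channel widths, and kernel sizes below are assumptions, not the architecture reported in the paper.

```python
import torch
from torch import nn

class DenoisingAutoencoder(nn.Module):
    """Illustrative convolutional autoencoder for single-channel
    aortography frames (architecture details are assumed)."""
    def __init__(self):
        super().__init__()
        # Encoder: downsample the noisy frame into a compact code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample the code back to the original resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

In training, such a model would be fed noise-corrupted frames and optimized (e.g. with an MSE loss) to reproduce the clean frames, so that at inference it denoises each aortography image before it reaches the detector.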
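The mAP figure cited in Results is the mean, over object classes, of per-class average precision. A minimal sketch of per-class AP at an IoU threshold of 0.5, with greedy matching of predictions to ground truth (the function names and simplifications here are illustrative, not the authors' evaluation code):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, thr=0.5):
    """AP for one class. preds: list of (confidence, box); gts: list of boxes."""
    preds = sorted(preds, key=lambda p: -p[0])  # highest confidence first
    matched, tp = set(), 0
    precisions, recalls = [], []
    for i, (_, box) in enumerate(preds, 1):
        # Greedily match against the best unmatched ground-truth box.
        best, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if j not in matched:
                o = iou(box, gt)
                if o > best:
                    best, best_j = o, j
        if best >= thr:
            matched.add(best_j)
            tp += 1
        precisions.append(tp / i)
        recalls.append(tp / len(gts))
    # Area under the stepwise precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

Averaging this quantity across the key-point classes (aortic root, delivery system, valve prosthesis) yields the mAP score by which the 51.9% result is reported.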