Abstract

A major hindrance to point-of-care ultrasound (POCUS) deep learning (DL) algorithm development is the lack of large publicly available image databases, like those found in traditional imaging. Our objective was to test the potential of using only apical 4-chamber (A4C) images from a large publicly available dataset to train a DL algorithm to visually estimate left ventricle (LV) ejection fraction (EF) from a parasternal long axis (PSL) window. Researchers embedded a VGG16 convolutional neural network inside a long short-term memory (LSTM) algorithm for ultrasound video analysis. We obtained access to the Stanford EchoNet-Dynamic database of approximately 10,000 A4C echo videos. All echo examinations were read during comprehensive echocardiography examinations and included calculated EF results. Ninety percent of the data were used for algorithm training and 10% for validation during training. The LV in A4C images takes on an appearance similar to that in PSL images when flipped horizontally across the vertical axis and rotated 90° counterclockwise (CCW), and could potentially be used to simulate PSL data. As part of DL algorithm training, researchers tested three training options by applying image manipulation to the Stanford A4C echo videos: training the DL algorithm only on unaltered Stanford A4C videos; training on unaltered videos and those rotated 90° CCW; and training on unaltered, then 90° CCW rotated, and then horizontally flipped A4C videos. As a real-world test, we obtained 569 echo examinations from a different medical center (UCLA) showing PSL window videos from comprehensive echo examinations, with respective EF, and tested the DL algorithm variants' performance on these actual PSL videos. We calculated the mean absolute error (MAE) of the algorithm's EF results against the Echo Lab calculated EF, per field standards for evaluating algorithm accuracy, and performed Bland-Altman analyses. MAE for skilled echo techs ranges from 4 to 5%. Prior algorithm training and testing only on Stanford A4C videos achieved an MAE of 8.08% (95% CI 7.60 to 8.55). In this study, the DL algorithm trained only on unaltered A4C videos and tested on unrelated PSL videos achieved an MAE of 27.03% (95% CI 25.59 to 28.46) for visual EF estimation. Training on unaltered A4C videos and then on 90° CCW rotated videos achieved an MAE of 17.41% (95% CI 16.26 to 18.19). Training on unaltered, then 90° CCW rotated, and then horizontally flipped videos achieved an MAE of 16.18% (95% CI 15.21 to 17.14). The Bland-Altman plots showed the vast majority of points falling within the 95% CI for the third and best training iteration. Our results indicate the potential for POCUS AI researchers to use non-POCUS image data in algorithm development by adapting it for training via video rotation and manipulation designed to simulate the desired imaging window. This may be important for future POCUS algorithm development, which may benefit from adaptation of traditional imager data and help overcome the paucity of POCUS databases.
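
The video manipulation described above (90° CCW rotation and horizontal flipping of A4C clips to approximate a PSL appearance) can be expressed compactly with array operations. The sketch below is illustrative only: it assumes clips are loaded as NumPy arrays shaped (frames, height, width[, channels]), the function names are hypothetical, and the exact order of flip and rotation used in the study is not specified in the abstract.

```python
import numpy as np

def rotate_90_ccw(video: np.ndarray) -> np.ndarray:
    """Rotate every frame 90 degrees counterclockwise.

    Assumes `video` is shaped (frames, height, width[, channels]).
    """
    # np.rot90 with k=1 over the (height, width) axes rotates each frame CCW.
    return np.rot90(video, k=1, axes=(1, 2))

def flip_horizontal(video: np.ndarray) -> np.ndarray:
    """Mirror every frame across its vertical axis (left-right flip)."""
    return np.flip(video, axis=2)

def simulate_psl_view(a4c_video: np.ndarray) -> np.ndarray:
    """One plausible composition: horizontal flip followed by 90° CCW rotation,
    producing a PSL-like orientation from an A4C clip."""
    return rotate_90_ccw(flip_horizontal(a4c_video))
```

In the staged training options, the unaltered, rotated, and flipped versions of each clip would be added to the training set while the original EF label is kept unchanged.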
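
The abstract describes a VGG16 network embedded inside an LSTM for video-level EF regression. A minimal Keras sketch of that pattern is shown below; the clip length, image size, LSTM width, weight initialization, and optimizer are assumptions for illustration and are not reported in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

FRAMES, HEIGHT, WIDTH, CHANNELS = 32, 224, 224, 3  # illustrative clip shape

# VGG16 used as a per-frame feature extractor (weights are an assumption).
cnn = VGG16(include_top=False, weights="imagenet", pooling="avg",
            input_shape=(HEIGHT, WIDTH, CHANNELS))

inputs = layers.Input(shape=(FRAMES, HEIGHT, WIDTH, CHANNELS))
# Apply the CNN to each frame, yielding a sequence of feature vectors...
features = layers.TimeDistributed(cnn)(inputs)
# ...which the LSTM summarizes across time.
temporal = layers.LSTM(256)(features)
# Single linear output for the EF estimate (regression).
ef = layers.Dense(1, activation="linear", name="ef")(temporal)

model = models.Model(inputs, ef)
model.compile(optimizer="adam", loss="mae")  # MAE mirrors the reported evaluation metric
```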
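
For the evaluation metrics, a hedged sketch of MAE with a 95% confidence interval and Bland-Altman limits of agreement is given below. The study's exact CI method is not stated in the abstract; this version uses a normal approximation on the absolute errors, and all function names are illustrative.

```python
import numpy as np
from scipy import stats

def mae_with_ci(pred_ef: np.ndarray, lab_ef: np.ndarray, confidence: float = 0.95):
    """MAE of predicted EF vs. Echo Lab EF, with a normal-approximation CI."""
    abs_err = np.abs(pred_ef - lab_ef)
    mae = abs_err.mean()
    sem = stats.sem(abs_err)
    z = stats.norm.ppf(0.5 + confidence / 2)
    return mae, (mae - z * sem, mae + z * sem)

def bland_altman_limits(pred_ef: np.ndarray, lab_ef: np.ndarray):
    """Bland-Altman bias and 95% limits of agreement (bias ± 1.96 SD of differences)."""
    diff = pred_ef - lab_ef
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```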
