Abstract

BACKGROUND: Automated calculation of left ventricular ejection fraction typically requires complex algorithms and depends on optimal visualization and tracing of endocardial borders. This significantly limits usability in bedside clinical applications, where ultrasound automation is needed most.

AIM: To create a simple deep learning (DL) regression-type algorithm that visually estimates left ventricular (LV) ejection fraction (EF) from a public database of actual patient echocardiography examinations, and to compare the results with echocardiography laboratory EF calculations.

METHODS: A simple DL architecture previously shown to perform well on ultrasound image analysis, VGG16, was used as the base architecture within a long short-term memory (LSTM) network for sequential image (video) analysis. After obtaining permission to use the Stanford EchoNet-Dynamic database, the researchers randomly set aside approximately 15% of the approximately 10036 apical 4-chamber echo videos for later performance testing. All database echo examinations had been read as part of comprehensive echocardiography studies and were coupled with EF, end-systolic and end-diastolic volumes, key frames, and coordinates for LV endocardial tracing in a CSV file. To better reflect point-of-care ultrasound (POCUS) clinical settings and time pressure, the algorithm was trained on echo video paired only with the calculated ejection fraction, without incorporating the additional volume, measurement, and coordinate data. Seventy percent of the original data was used for algorithm training and 15% for validation during training. The previously separated 15% (1263 echo videos) was used for performance testing after training was completed. Given the inherent variability of echo EF measurement and field standards for evaluating algorithm accuracy, mean absolute error (MAE) and root mean square error (RMSE) were calculated for the algorithm's EF results against the echo laboratory calculated EF; a Bland-Altman analysis was also performed. MAE for skilled echocardiographers has been established to range from 4% to 5%.

RESULTS: The DL algorithm's visually estimated EF had an MAE of 8.08% (95%CI 7.60 to 8.55), suggesting good performance compared with highly skilled humans. The RMSE was 11.98 and the correlation was 0.348.

CONCLUSION: This experimental, simplified DL algorithm showed promise and proved reasonably accurate at visually estimating LV EF from short real-time echo video clips. Less burdensome than the complex DL approaches used for EF calculation, such an approach may be better suited to POCUS settings once improved upon by future research and development.
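The abstract describes a VGG16 backbone feeding an LSTM that regresses EF directly from a video clip. The following is a minimal sketch of that kind of architecture, assuming a TensorFlow/Keras implementation; the frame count, input resolution, layer widths, and optimizer are illustrative assumptions and are not values reported by the authors.

```python
# Minimal sketch of a VGG16 + LSTM regression model for visual EF estimation.
# NOTE: frame count, input size, and dense-layer widths are illustrative
# assumptions; the abstract does not specify these hyperparameters.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

FRAMES, HEIGHT, WIDTH, CHANNELS = 32, 112, 112, 3  # assumed clip shape

# VGG16 (ImageNet weights, no classifier head) extracts per-frame features.
vgg_base = VGG16(include_top=False, weights="imagenet",
                 input_shape=(HEIGHT, WIDTH, CHANNELS), pooling="avg")

video_input = layers.Input(shape=(FRAMES, HEIGHT, WIDTH, CHANNELS))
# TimeDistributed applies the same VGG16 backbone to every frame in the clip.
frame_features = layers.TimeDistributed(vgg_base)(video_input)
# The LSTM aggregates the frame-feature sequence across the cardiac cycle.
temporal = layers.LSTM(256)(frame_features)
x = layers.Dense(64, activation="relu")(temporal)
# Single linear output: the regressed ejection fraction (percent).
ef_output = layers.Dense(1, activation="linear")(x)

model = models.Model(video_input, ef_output)
model.compile(optimizer="adam", loss="mae")  # MAE matches the reported metric
model.summary()
```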
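The reported metrics (MAE, RMSE, correlation, Bland-Altman) can be computed as sketched below; the arrays are placeholders for illustration only, not the study's data.

```python
# Sketch of the evaluation metrics named in the abstract (MAE, RMSE,
# correlation, Bland-Altman limits of agreement) for predicted vs.
# echo-lab EF values. The arrays below are placeholders, not study data.
import numpy as np

ef_lab = np.array([55.0, 62.0, 40.0, 35.0, 60.0])   # echo-lab calculated EF (%)
ef_pred = np.array([50.0, 58.0, 47.0, 39.0, 66.0])  # algorithm-estimated EF (%)

errors = ef_pred - ef_lab
mae = np.mean(np.abs(errors))                # mean absolute error
rmse = np.sqrt(np.mean(errors ** 2))         # root mean square error
corr = np.corrcoef(ef_pred, ef_lab)[0, 1]    # Pearson correlation

# Bland-Altman: bias (mean difference) and 95% limits of agreement.
bias = np.mean(errors)
loa_low = bias - 1.96 * np.std(errors)
loa_high = bias + 1.96 * np.std(errors)

print(f"MAE={mae:.2f}%  RMSE={rmse:.2f}  r={corr:.3f}")
print(f"Bland-Altman bias={bias:.2f}%  LoA=({loa_low:.2f}, {loa_high:.2f})")
```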
