Abstract

Real-time artificial intelligence (AI) applications that use point-of-care ultrasound are often commercial, expensive, and incompatible with existing ultrasound machines and workflows, limiting the reach of effective computer vision techniques that could assist at the bedside for diagnostic and teaching purposes. The purpose of this study was to evaluate whether formal transthoracic echocardiogram studies can be used for transfer learning on emergency department (ED) echocardiogram views and to build a portable model for real-time cardiac function categorization at the bedside that works with existing ultrasound equipment. The ultimate goal of the project is a portable hardware and software solution compatible with existing ED ultrasound machines and workflows. A previously described three-dimensional convolutional neural network (CNN), trained on 10,030 formal transthoracic echocardiogram apical 4-chamber (A4C) videos, was adapted for use in the ED. Through weak supervision, expert sonographer tracings were used to generate frame-level semantic segmentations of the left ventricle, yielding video output of cardiac function over time. A total of 1,123 ED A4C videos from July 2020 to December 2020 were labeled into four categories (‘normal,’ ‘slightly reduced,’ ‘moderately reduced,’ ‘severely reduced’) by board-certified emergency physicians, with consensus labeling by an ultrasound division faculty member. Given the variability of ED A4C videos, and to assist with CNN training, an interpretability label (‘yes,’ ‘partial,’ ‘no’) was also obtained, corresponding to all, some, or none of the beats being complete and clinically interpretable. These labeled videos were intermixed with the formal echocardiogram video set, and the CNN was retrained to provide a binary output (normal vs. hypocontractile) and segmented video of left ventricular function over time. Receiver operating characteristic (ROC) curves were generated to assess diagnostic ability. Finally, the video output port of an ultrasound machine was used to make model outputs (label and segmented video) available to providers in near real time (<0.2 seconds to create model output) using inexpensive hardware at the bedside. An initial set of 133 A4C videos was evaluated by the model before any retraining, with an area under the ROC curve (AUC) of 0.81 (95% CI: 0.77–0.85) for the binary classifier. After intermixing the remaining ED videos and retraining the CNN, an improved AUC of 0.91 (95% CI: 0.89–0.94) was observed. For test-set A4C videos that the model deemed interpretable, the segmented videos accurately demonstrated contractility by shading the left ventricle over the cardiac cycle. A three-dimensional CNN trained on a cardiology echocardiogram dataset can be retrained on ED A4C videos to achieve high discriminatory value and can be adapted to existing ED ultrasound machines and workflows, providing an inexpensive tool for real-time feedback at the bedside. Along with the categorical label, segmented video output can aid uptake and understanding of model output, potentially lowering the barrier to entry to ED echocardiography for providers lacking formal ultrasound training and providing guidance to trainees.
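The abstract does not name the backbone or release training code; the sketch below illustrates the transfer-learning step it describes, assuming a torchvision R(2+1)D video network as a stand-in for the echo-pretrained 3D CNN and a hypothetical DataLoader of (clip, label) pairs.

```python
# Minimal transfer-learning sketch (assumed backbone, loss, and optimizer);
# not the authors' released code.
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

model = r2plus1d_18(pretrained=True)           # stand-in for the echo-pretrained 3D CNN
model.fc = nn.Linear(model.fc.in_features, 2)  # new binary head: normal vs hypocontractile

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

def finetune_epoch(loader, device="cuda"):
    """One pass over intermixed formal + ED A4C clips; `loader` is hypothetical."""
    model.train().to(device)
    for clips, labels in loader:               # clips: (N, 3, T, H, W) float tensors
        optimizer.zero_grad()
        loss = criterion(model(clips.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()
```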
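For the weakly supervised segmentation targets, frame-level left-ventricle masks could be rasterized from the expert tracings roughly as follows; the polygon-vertex format and frame size are illustrative assumptions, not the authors' pipeline.

```python
# Rasterize one expert LV tracing into a per-frame binary mask for weak supervision.
import numpy as np
import cv2

def tracing_to_mask(vertices, height=112, width=112):
    """vertices: list of (x, y) points outlining the left ventricle in one frame."""
    mask = np.zeros((height, width), dtype=np.uint8)
    polygon = np.array(vertices, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [polygon], 255)         # fill the traced LV region
    return mask
```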
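The reported ROC analysis (AUC with a bootstrap 95% CI) can be reproduced generically with scikit-learn; the labels and scores below are toy placeholders for the held-out test labels and the model's hypocontractile probabilities.

```python
# Generic ROC/AUC computation with a bootstrap 95% CI, mirroring the reported metrics.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])                  # toy test labels
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90])  # toy P(hypocontractile)

fpr, tpr, _ = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))

rng = np.random.default_rng(0)
aucs = []
for _ in range(1000):                                  # resample videos with replacement
    idx = rng.integers(0, len(y_true), len(y_true))
    if y_true[idx].min() != y_true[idx].max():         # need both classes in the resample
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
print("95% CI:", np.percentile(aucs, [2.5, 97.5]))
```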
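One plausible way to meet the sub-0.2-second bedside budget is to read the machine's video output port through a commodity capture card; the device index, 32-frame clip length, 112x112 resize, and skipped intensity normalization below are assumptions, not the authors' hardware details.

```python
# Bedside-inference sketch: grab a clip from the ultrasound machine's video-out
# via a generic capture card and time one model call against the <0.2 s budget.
import time
import cv2
import numpy as np
import torch

cap = cv2.VideoCapture(0)                      # capture card on the video output port
frames = []
while len(frames) < 32:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.resize(frame, (112, 112)))
cap.release()

clip = (torch.from_numpy(np.stack(frames))     # (T, H, W, C) uint8
        .permute(3, 0, 1, 2).float().unsqueeze(0))  # -> (1, C, T, H, W)

model.eval()                                   # `model` from the fine-tuning sketch above
start = time.perf_counter()
with torch.no_grad():
    logits = model(clip)
print(f"label: {logits.argmax(1).item()}, latency: {time.perf_counter() - start:.3f} s")
```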
