Abstract

Introduction

The use of artificial intelligence (AI) in echocardiography has grown rapidly in recent years, offering new ways to overcome inter-operator variability and dependence on operator experience. Although AI applications in echocardiography are still in their infancy, AI has the potential to improve the accuracy and efficiency of manual tracings. Deep learning, a subset of machine learning, is gaining popularity in echocardiography as the state of the art in visual data analysis.

Purpose

To evaluate deep learning for two initial tasks in automated cardiac measurements: view recognition and detection of end-systolic (ES) and end-diastolic (ED) frames.

Methods

Two-dimensional echocardiography data from 230 patients (with various indications for study) were used to train and validate neural networks. Raw pixel data were extracted from EPIQ 7G, Vivid E95 and Vivid 7 imaging platforms. Images were labeled according to their view: parasternal long axis (PLA), basal short axis, short axis at mitral level, and apical two-, three- and four-chamber (A4C). Additionally, ES and ED frames were labeled for the A4C and PLA views. Images were de-identified by applying black pixel masks to non-anatomical data and removing metadata. A convolutional neural network (CNN) was used to classify the six views; a total of 34752 frames (5792 per view) were used to train the network and 3972 (662 per view) to validate it. A Long-term Recurrent Convolutional Network (LRCN), which combines spatial and temporal processing, was used for ES and ED frame detection; 195 sequences of 92 frames each were used for training and 35 for validation.

Results

The CNN for view classification achieved an AUC of 0.95 (sensitivity 95%, specificity 97%). Accuracy was lower for visually similar views, namely the apical three-chamber and apical two-chamber. ES and ED detection succeeded only when the LRCN was trained for regression rather than per-frame classification. For cardiac cycle evaluation, the LRCN achieved an average frame difference (aFD) of 2.31 (SD ±2.15) frames for ED detection and 1.97 (SD ±2.04) frames for ES detection, corresponding to an error of approximately 0.04 s.

Conclusion

Determining the echocardiographic view and evaluating the cardiac cycle are the first steps in automating cardiac measurements. We have demonstrated the potential of two deep learning algorithms to accomplish these tasks. These initial results are promising for the development of neural networks for cardiac segmentation and measurement of anatomical structures.
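As a note on the de-identification step described in Methods, a minimal NumPy sketch of sector masking is shown below. The function name, mask geometry and usage values are illustrative assumptions; the study's actual pipeline (including how metadata was removed) is not specified in the abstract.

    # Illustrative de-identification step, assuming the ultrasound sector can be
    # located: pixels outside a binary mask of the imaging sector are set to
    # black, removing burned-in text and traces. Mask construction and metadata
    # stripping are dataset-specific; this is a sketch, not the study's pipeline.
    import numpy as np

    def mask_non_anatomical(frame: np.ndarray, sector_mask: np.ndarray) -> np.ndarray:
        """Zero out all pixels outside the anatomical imaging sector."""
        return np.where(sector_mask, frame, 0).astype(frame.dtype)

    frame = np.random.randint(0, 255, (600, 800), dtype=np.uint8)  # toy frame
    sector_mask = np.zeros_like(frame, dtype=bool)
    sector_mask[100:500, 150:650] = True        # hypothetical sector location
    clean = mask_non_anatomical(frame, sector_mask)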
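The abstract does not describe the CNN architecture used for six-view classification. The following PyTorch sketch shows the general shape of such a classifier under assumed choices (single-channel 224x224 input, three small convolutional blocks, global average pooling); it illustrates the approach, not the authors' network.

    # Minimal sketch of a CNN view classifier in PyTorch. Layer sizes, input
    # resolution and the grayscale assumption are illustrative, not the
    # architecture used in the study.
    import torch
    import torch.nn as nn

    VIEWS = ["PLA", "basal_short_axis", "short_axis_mitral", "A2C", "A3C", "A4C"]

    class ViewClassifier(nn.Module):
        def __init__(self, n_views: int = len(VIEWS)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_views)

        def forward(self, x):  # x: (batch, 1, H, W) de-identified grayscale frames
            return self.classifier(self.features(x).flatten(1))

    model = ViewClassifier()
    logits = model(torch.randn(8, 1, 224, 224))  # one logit per view
    pred_view = logits.argmax(dim=1)             # index into VIEWS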
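Similarly, a minimal PyTorch sketch of an LRCN, a per-frame CNN encoder followed by an LSTM, illustrates the regression formulation reported in Results: the network emits one continuous value per frame, and ED/ES frames are read off as extrema of the resulting curve. The smooth per-frame target, layer sizes and 112x112 input are assumptions for illustration, not the authors' exact design.

    # Minimal LRCN sketch: a CNN encodes each frame (spatial), an LSTM models
    # the sequence (temporal), and a linear head emits one value per frame.
    # The abstract reports that training succeeded as regression rather than
    # per-frame classification; the target used here is an assumed stand-in.
    import torch
    import torch.nn as nn

    class LRCN(nn.Module):
        def __init__(self, feat_dim: int = 64, hidden: int = 128):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)  # one regression value per frame

        def forward(self, clips):  # clips: (batch, T, 1, H, W), e.g. T = 92 frames
            b, t = clips.shape[:2]
            feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)
            return self.head(out).squeeze(-1)  # (batch, T) per-frame signal

    model = LRCN()
    clips = torch.randn(2, 92, 1, 112, 112)          # two toy 92-frame sequences
    signal = model(clips)                            # smooth curve over the cycle
    loss = nn.MSELoss()(signal, torch.randn(2, 92))  # regression, not one-hot labels
    # At inference, ED/ES frames would be read off as the extrema of `signal`.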
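The aFD metric in Results can be read as the mean absolute difference, in frames, between predicted and annotated ED (or ES) frame indices. The conversion to seconds below assumes an acquisition rate of roughly 50 frames/s, inferred from the abstract's figure of about 0.04 s for a roughly 2-frame error; the actual frame rates are not stated.

    # A plausible reading of the average frame difference (aFD): the mean
    # absolute error, in frames, between predicted and annotated frame indices.
    # The 50 frames/s used for the time conversion is an assumption.
    def average_frame_difference(predicted, annotated):
        """Mean |predicted - annotated| over all sequences, in frames."""
        diffs = [abs(p - a) for p, a in zip(predicted, annotated)]
        return sum(diffs) / len(diffs)

    aFD_ed = average_frame_difference([41, 43, 40], [43, 41, 42])  # toy indices
    seconds = aFD_ed / 50.0  # at an assumed ~50 frames/s, ~2 frames ~ 0.04 s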
