Abstract

Echocardiography is a challenging sonographic examination with high user-dependence, requiring significant training and experience. To improve the use of ultrasound in emergency management, especially by non-expert users, we propose a purely image-based machine-learning algorithm that does not rely on any external tracking devices. The algorithm guides the motion of the probe towards clinically relevant views, such as the apical four-chamber or parasternal long-axis view, using a multi-task deep convolutional neural network (CNN). This network was trained on 27 human subjects using a multi-task learning paradigm to: (a) detect and exclude ultrasound frames whose quality is insufficient for guidance, (b) identify one of three typical imaging windows (apical, parasternal, and subcostal) to guide the user through the exam workflow, and (c) predict the 6-DOF motion of the transducer towards a target view, i.e., rotational and translational motion. In addition, by deploying a relatively lightweight architecture we ensured that the algorithm runs at approximately 25 frames per second on a commercially available mobile device. Evaluation of the system on three unseen human subjects demonstrated that the method can guide an ultrasound transducer to a target view with an average rotational and translational accuracy of 3.3 ± 2.6° and 2.0 ± 1.6 mm, respectively, when the probe is close to the target (<5 mm). We believe this accuracy is sufficient to find an image on which the user can make quick, qualitative evaluations, such as the detection of pericardial effusion or cardiac activity (squeeze, mitral valve motion, cardiac arrest, etc.), as well as perform quantitative calculations such as ejection fraction.
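
To illustrate the three-headed multi-task design described in the abstract, the following is a minimal sketch in PyTorch. The MobileNetV2 backbone, the head dimensions, and the `GuidanceNet` name are assumptions chosen for illustration (a lightweight mobile-friendly trunk consistent with the ~25 fps claim), not the authors' published architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class GuidanceNet(nn.Module):
    """Multi-task CNN sketch: quality gating, imaging-window classification,
    and 6-DOF probe-motion regression from a single ultrasound frame."""

    def __init__(self, num_windows: int = 3):
        super().__init__()
        # Assumed lightweight backbone for mobile-rate inference.
        self.features = mobilenet_v2(weights=None).features  # shared trunk
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 1280  # MobileNetV2 final channel count
        # (a) binary head: is the frame usable for guidance?
        self.quality = nn.Linear(feat_dim, 1)
        # (b) window head: apical / parasternal / subcostal
        self.window = nn.Linear(feat_dim, num_windows)
        # (c) motion head: 3 rotations + 3 translations toward the target view
        self.motion = nn.Linear(feat_dim, 6)

    def forward(self, x: torch.Tensor):
        f = self.pool(self.features(x)).flatten(1)
        return (
            torch.sigmoid(self.quality(f)),  # P(frame quality sufficient)
            self.window(f),                  # window logits
            self.motion(f),                  # [rx, ry, rz, tx, ty, tz]
        )


# Example: one grayscale frame replicated to 3 channels for the backbone.
frame = torch.randn(1, 3, 224, 224)
quality, window_logits, motion = GuidanceNet()(frame)
```

In such a design, frames failing the quality gate in (a) would be discarded before the window and motion outputs are shown to the user, matching the workflow the abstract describes.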
