Abstract

Motion estimation and segmentation are both critical steps in identifying and assessing myocardial dysfunction, but they are traditionally treated as distinct tasks and solved as separate steps, even though many motion estimation techniques rely on accurate segmentations. The computer vision and medical image analysis literature has demonstrated that these two tasks can be mutually beneficial when solved simultaneously. In this work, we propose a multi-task learning network that concurrently predicts volumetric segmentations of the left ventricle and estimates motion between 3D echocardiographic image pairs. The model exploits complementary latent features between the two tasks using a shared feature encoder with task-specific decoding branches, and anatomically inspired constraints are incorporated to enforce realistic motion patterns. We evaluate the proposed model on an in vivo 3D echocardiographic canine dataset. Results suggest that coupling the two tasks in a single learning framework compares favorably against single-task learning and other alternative methods.
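To make the architectural idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a multi-task network with a shared 3D encoder and two task-specific decoding branches: one producing left-ventricle segmentation logits and one a dense 3D displacement (motion) field. All layer sizes, channel counts, and names here are assumptions for illustration only.

```python
# Hypothetical sketch of a shared-encoder, dual-decoder multi-task network;
# layer widths and structure are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class MultiTaskEchoNet(nn.Module):
    def __init__(self, in_channels=2, feat=8):
        super().__init__()
        # Shared encoder over a concatenated 3D image pair (2 input channels)
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, feat, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Task-specific decoding branches sharing the same latent features
        self.seg_head = nn.Conv3d(feat, 1, kernel_size=1)   # LV segmentation logits
        self.flow_head = nn.Conv3d(feat, 3, kernel_size=1)  # 3D displacement field

    def forward(self, pair):
        z = self.encoder(pair)  # complementary latent features for both tasks
        return self.seg_head(z), self.flow_head(z)

# Example: a single 16x16x16 volume pair stacked along the channel axis
pair = torch.randn(1, 2, 16, 16, 16)  # (batch, 2 frames, D, H, W)
seg, flow = MultiTaskEchoNet()(pair)
print(seg.shape, flow.shape)
```

In a real training setup, the segmentation branch would be supervised with a mask loss (e.g. Dice) and the motion branch with an image-similarity loss on the warped frame, plus the anatomically inspired regularization terms the abstract mentions.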
