Abstract

Cardiovascular diseases (CVDs) are the leading cause of mortality worldwide. Cardiac images and meshes are the two primary modalities for presenting the shape and structure of the heart, and both have been demonstrated to be effective in CVD prediction and diagnosis. However, previous research has generally focused on a single modality (image or mesh), and few studies have jointly considered the image and mesh representations of the heart. To obtain efficient and explainable biomarkers for CVD prediction and diagnosis, both representations need to be considered jointly. We design a novel multi-channel variational auto-encoder, the mesh-image variational auto-encoder, to learn a joint representation of paired meshes and images. After training, the shape-aware image representation (SAIR) can be learned directly from raw images and applied to downstream CVD prediction and diagnosis. We demonstrate our method on data from the UK Biobank study and two other datasets via extensive experiments. In acute myocardial infarction prediction, SAIR achieves 81.43% accuracy, significantly higher than traditional biomarkers such as metadata and clinical indices (left-ventricle and right-ventricle clinical indices of cardiac function such as chamber volume, mass, and ejection fraction). Our mesh-image variational auto-encoder also provides a novel approach for 3D cardiac mesh reconstruction from images. Extraction of SAIR is fast and requires no segmentation masks, and its focus can be visualized on the corresponding cardiac meshes. SAIR achieves better performance than traditional biomarkers and can serve as an efficient supplement to them, which holds significant potential for CVD analysis.
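The core idea above, a multi-channel VAE in which each modality has its own encoder and decoder but both share a single latent code, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation: the dimensions, linear encoders/decoders, and the simple averaging fusion of the two channels are all assumptions made for clarity (the actual model uses neural networks, and other fusion schemes such as product-of-experts are common).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
D_IMG, D_MESH, D_Z = 16, 12, 4

# Linear maps stand in for the image/mesh encoder and decoder networks.
W_img_mu = rng.normal(scale=0.1, size=(D_Z, D_IMG))
W_img_lv = rng.normal(scale=0.1, size=(D_Z, D_IMG))
W_mesh_mu = rng.normal(scale=0.1, size=(D_Z, D_MESH))
W_mesh_lv = rng.normal(scale=0.1, size=(D_Z, D_MESH))
W_dec_img = rng.normal(scale=0.1, size=(D_IMG, D_Z))
W_dec_mesh = rng.normal(scale=0.1, size=(D_MESH, D_Z))

def encode(x, W_mu, W_lv):
    """Map an input to Gaussian latent parameters (mean, log-variance)."""
    return W_mu @ x, W_lv @ x

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# One paired training sample: an image feature vector and a mesh one.
x_img = rng.normal(size=D_IMG)
x_mesh = rng.normal(size=D_MESH)

# Each channel encodes its own modality into the SHARED latent space.
mu_i, lv_i = encode(x_img, W_img_mu, W_img_lv)
mu_m, lv_m = encode(x_mesh, W_mesh_mu, W_mesh_lv)

# Fuse the two channels; a plain average is used purely for illustration.
mu, logvar = (mu_i + mu_m) / 2, (lv_i + lv_m) / 2
z = reparameterize(mu, logvar)

# Both modalities are reconstructed from the single shared code z. This
# shared code is what lets the image encoder alone produce a shape-aware
# representation (SAIR) at test time, without segmentation masks.
recon_img = W_dec_img @ z
recon_mesh = W_dec_mesh @ z

# ELBO-style objective: reconstruction terms for both channels + KL.
loss = (np.sum((recon_img - x_img) ** 2)
        + np.sum((recon_mesh - x_mesh) ** 2)
        + kl_to_standard_normal(mu, logvar))
print(z.shape, round(float(loss), 3))
```

After training such a model, only the image channel is needed at inference: encoding a raw image yields a latent code shaped by the mesh channel during training, which is the intuition behind extracting SAIR directly from images.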
