Abstract

3-D + t echocardiography (3DtE) is widely employed for the assessment of left ventricular anatomy and function. However, the information derived from 3DtE images can be affected by poor image quality and a limited field of view. Registration of multiview 3DtE sequences has been proposed to compound images from different acoustic windows, thereby improving both image quality and coverage. We propose a novel subspace error metric for the automatic and robust registration of multiview intrasubject 3DtE sequences. The proposed metric employs linear dimensionality reduction to exploit the similarity in the temporal variation of multiview 3DtE sequences. The use of a low-dimensional subspace for the computation of the error metric reduces the influence of image artefacts and noise on the registration optimisation, resulting in fast and robust registrations that do not require a starting estimate. The accuracy, robustness, and execution time of the proposed registration were thoroughly validated. Results on 48 pairwise multiview 3DtE registrations show the proposed error metric to outperform a state-of-the-art phase-based error metric, with improvements in the median/75th percentile of the target registration error of 21%/31% and an improvement in mean execution time of 45%. The proposed subspace error metric outperforms sum-of-squared-differences and phase-based error metrics for the registration of multiview 3DtE sequences in terms of accuracy, robustness, and execution time. It therefore has the potential to replace standard image error metrics for a robust and automatic registration of multiview 3DtE sequences.
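
As an illustrative aside, the following minimal sketch shows how a subspace error metric of this kind could be computed, assuming linear dimensionality reduction by PCA over per-voxel temporal intensity profiles and a squared-difference cost in the reduced space. The function names, the choice of PCA, and the use of the target sequence to learn the basis are assumptions for illustration, not the paper's exact formulation.

import numpy as np

def temporal_profiles(seq):
    """Reshape a 3D+t volume (T, Z, Y, X) into per-voxel temporal profiles (N, T)."""
    t = seq.shape[0]
    return seq.reshape(t, -1).T

def fit_temporal_basis(profiles, n_components=4):
    """Return the mean profile and the top principal temporal directions (T, k)."""
    mean = profiles.mean(axis=0)
    centred = profiles - mean
    # SVD of the centred profiles; the right singular vectors span the temporal subspace.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components].T

def subspace_error(target_seq, warped_source_seq, n_components=4):
    """Mean squared difference between the two sequences after projection
    onto the low-dimensional temporal subspace learned from the target."""
    tgt = temporal_profiles(target_seq)
    src = temporal_profiles(warped_source_seq)
    mean, basis = fit_temporal_basis(tgt, n_components)
    tgt_low = (tgt - mean) @ basis   # (N, k)
    src_low = (src - mean) @ basis   # (N, k)
    return np.mean((tgt_low - src_low) ** 2)

Because the comparison is made in a small number of temporal modes rather than over every raw frame, frame-wise noise and view-dependent artefacts that do not follow the dominant temporal variation contribute less to the metric, which is consistent with the robustness reported in the abstract.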

Highlights

  • Echocardiography imaging is routinely employed for the assessment of Left Ventricular (LV) anatomy and function due to its high temporal and spatial resolution, non-ionising nature, low cost and portability

  • The accuracy and robustness of this registration are affected by the 3D+t echocardiography (3DtE) angle-dependent image quality and image artefacts, which make standard image error metrics, such as sum-of-squared differences (SSD) or normalised cross-correlation (NCC), unsuitable for this task [9]

  • For the multi-view registration, target registration errors [23] were computed using as targets the world coordinates of each voxel in the overlapping domain of the target sequence and the source sequence transformed with the ground-truth transformation
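
A minimal sketch of such a target registration error computation is given below, assuming rigid transformations expressed as 4 x 4 homogeneous matrices and the voxel world coordinates of the overlapping domain supplied as an (N, 3) array; the function and argument names are placeholders rather than the paper's code.

import numpy as np

def target_registration_error(world_points, T_ground_truth, T_estimated):
    """Mean Euclidean distance between points mapped by the ground-truth
    and by the estimated transformation (both 4x4 homogeneous matrices)."""
    pts = np.hstack([world_points, np.ones((world_points.shape[0], 1))])  # (N, 4)
    mapped_gt = (T_ground_truth @ pts.T).T[:, :3]
    mapped_est = (T_estimated @ pts.T).T[:, :3]
    return np.mean(np.linalg.norm(mapped_gt - mapped_est, axis=1))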



Introduction

Echocardiography imaging is routinely employed for the assessment of Left Ventricular (LV) anatomy and function due to its high temporal and spatial resolution, non-ionising nature, low cost and portability. LV structures appear significantly different depending on the 3-D + t echocardiography (3DtE) beam incidence angle and acoustic window, impairing the assessment of LV anatomy and function. Several studies [4], [5], [6], [7], [8] have shown the advantages of combining 3DtE images from multiple acoustic windows for a range of applications, from LV segmentation to motion and strain estimation. Fusion of multi-view 3DtE sequences requires the correct registration of the LV geometry from different acoustic windows. The accuracy and robustness of this registration are affected by the 3DtE angle-dependent image quality and image artefacts, which make standard image error metrics, such as sum-of-squared differences (SSD) or normalised cross-correlation (NCC), unsuitable for this task [9].
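
For reference, the two standard intensity-based metrics mentioned above can be written as follows; this is a generic sketch over the overlap of two volumes, not the implementation evaluated in the paper.

import numpy as np

def ssd(fixed, moving):
    """Sum-of-squared intensity differences over the overlapping voxels."""
    return np.sum((fixed - moving) ** 2)

def ncc(fixed, moving):
    """Normalised cross-correlation in [-1, 1] over the overlapping voxels."""
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    denom = np.sqrt(np.sum(f ** 2) * np.sum(m ** 2))
    return np.sum(f * m) / denom if denom > 0 else 0.0

Both metrics depend directly on raw voxel intensities, which is one reason they are reported to be unsuitable for multi-view 3DtE registration in the presence of angle-dependent dropout and artefacts [9].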

