Abstract

Background: In recent years, deep learning techniques have demonstrated promising performance in echocardiography (echo) data segmentation, a critical step in the diagnosis and prognosis of cardiovascular diseases (CVDs). However, their successful implementation requires a large number of high-quality annotated samples, whose acquisition is arduous and expertise-demanding. This study therefore aims to circumvent the tedious, time-consuming, and expertise-demanding data annotation involved in deep learning-based echo data segmentation.

Methods: We propose a two-phase framework for the fast generation of the annotated echo data needed to implement intelligent cardiac structure segmentation systems. First, cardiac structures of multiple sizes and orientations are simulated using a polynomial fitting method. Second, the simulated structures are embedded into curated endoscopic ultrasound images using a Fourier Transform algorithm, yielding pairs of annotated samples. The practical significance of the proposed framework is validated by using the generated realistic annotated images as an auxiliary dataset to pretrain deep learning models for automatic segmentation of the left ventricle and the left ventricle wall in real echo data.

Results: Extensive experimental analyses indicate that, compared with training from scratch, fine-tuning after pretraining on the generated dataset consistently yields significant performance improvements, with margins of up to 12.9% in Dice and 7.74% in IoU.

Conclusion: The proposed framework has great potential to alleviate the shortage of labeled data that hampers the deployment of deep learning approaches in echo data analysis.
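The two-phase pipeline described in the Methods can be sketched as follows. This is a minimal illustrative reading, not the authors' implementation: the exact polynomial fitting scheme and the specific Fourier-domain embedding used in the paper are not detailed in the abstract, so the radial-contour fitting and amplitude-spectrum blending below are assumptions.

```python
import numpy as np

def simulate_structure(n_points=200, degree=4, seed=None):
    """Phase 1 (sketch): simulate a smooth closed contour for a cardiac
    structure by polynomial-fitting randomly perturbed radial samples.
    The radial parameterization is a hypothetical choice."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, 2.0 * np.pi, n_points)
    # Noisy radii around a unit circle; the polynomial fit smooths them
    # into a plausible anatomical outline.
    radii = 1.0 + 0.2 * rng.standard_normal(n_points)
    coeffs = np.polyfit(theta, radii, degree)
    smooth_r = np.polyval(coeffs, theta)
    return smooth_r * np.cos(theta), smooth_r * np.sin(theta)

def fourier_embed(background, structure_mask, alpha=0.3):
    """Phase 2 (sketch): embed a binary structure mask into a background
    ultrasound image by mixing Fourier amplitude spectra while keeping the
    background's phase. One plausible interpretation of the paper's
    'Fourier Transform algorithm'; alpha is an assumed mixing weight."""
    fb = np.fft.fft2(background)
    fs = np.fft.fft2(structure_mask.astype(float))
    amplitude = (1.0 - alpha) * np.abs(fb) + alpha * np.abs(fs)
    phase = np.angle(fb)
    blended = np.fft.ifft2(amplitude * np.exp(1j * phase))
    return np.real(blended)
```

Pairing each embedded image with its source mask yields an (image, label) training sample at no annotation cost, which is the essence of the framework.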
