Abstract

Background: Interactive echocardiography translation is an efficient educational tool for mastering cardiac anatomy. It strengthens students' understanding through pixel-level translation between echocardiography and theoretical sketch images. Previous studies split the task into two separate problems, image segmentation and image synthesis, which makes pixel-level corresponding translation hard to achieve. It is also challenging to leverage deep-learning-based methods in each phase when only a handful of annotations are available. Methods: To address interactive translation with limited annotations, we present a two-step transfer learning approach. First, we train two independent parent networks, the ultrasound-to-sketch (U2S) parent network and the sketch-to-ultrasound (S2U) parent network. U2S translation is similar to a segmentation task with sector boundary inference, so the U2S parent network is a U-Net trained on the public VOC2012 segmentation dataset. S2U aims at recovering ultrasound texture, so the S2U parent network is a decoder network that generates ultrasound data from random input. After pretraining the parent networks, an encoder network is attached to the S2U parent network to translate ultrasound images into sketch images. We then jointly perform transfer learning for U2S and S2U within the CGAN framework. Results and conclusion: Quantitative and qualitative comparisons of 1-shot, 5-shot, and 10-shot transfer learning show the effectiveness of the proposed algorithm. Interactive translation is achieved with few-shot transfer learning, which accelerates the development of new applications from scratch. Our few-shot transfer learning has great potential in the biomedical computer-aided image translation field, where annotated data are extremely precious.
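
To make the two-step procedure concrete, below is a minimal PyTorch-style sketch of the second step: the pretrained S2U decoder is kept, a fresh encoder is attached in front of it, and the combined network is fine-tuned on the few paired examples. The module names, layer sizes, and toy tensors are illustrative assumptions, not the authors' implementation, and the adversarial CGAN term is indicated only by a comment.

```python
# Minimal sketch of step two of the transfer pipeline (PyTorch).
# All module names, layer sizes, and toy tensors are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 1-channel input image to a latent feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """S2U parent: generates ultrasound-like texture from a latent input.
    In step one this decoder is pretrained from random latent codes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Step 1 (not shown): pretrain the U2S parent (a U-Net) on VOC2012 and the
# S2U parent (the decoder above) on random latent inputs.
pretrained_decoder = Decoder()

# Step 2: attach a fresh encoder in front of the pretrained decoder and
# fine-tune the combined network on the few paired examples. In the full
# method this happens jointly with U2S inside a conditional GAN; only the
# reconstruction (L1) term is written out here.
generator = nn.Sequential(Encoder(), pretrained_decoder)
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)
l1 = nn.L1Loss()

source = torch.rand(4, 1, 64, 64)   # toy batch of input images
target = torch.rand(4, 1, 64, 64)   # toy batch of paired target images

output = generator(source)
loss = l1(output, target)           # plus the adversarial CGAN loss in practice
loss.backward()
optimizer.step()
```

In the reported experiments only 1, 5, or 10 annotated pairs are used for this fine-tuning stage, which is what the 1-shot, 5-shot, and 10-shot comparisons refer to.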

Highlights

  • Interactive echocardiography translation is an efficient educational tool for mastering cardiac anatomy

  • A more efficient method of interactive translation between ultrasound images and theoretical sketch images is still lacking. This makes image processing difficult in our case: echocardiography is characterized by a deformable appearance and poor spatial resolution, and only limited annotations are available, which creates obstacles both to achieving good performance and to leveraging state-of-the-art deep learning methods

  • The segmentation (U2S) step has been addressed with the following methods: level set (LS) segmentation [1], deformable templates [2, 3], active shape models (ASM) [4, 5], active contour methods, active appearance models (AAM), bottom-up approaches, and database-guided (DB-guided) segmentation


Summary

Background

Echocardiography education has dramatically helped students master cardiac structure assessment by combining cardiac ultrasound images with simulators. U2S is often cast as a segmentation task and addressed with the following methods: level set (LS) segmentation [1], deformable templates [2, 3], active shape models (ASM) [4, 5], active contour methods, active appearance models (AAM), bottom-up approaches, and database-guided (DB-guided) segmentation. For the synthesis direction, improvements that use ultrasound recordings as templates for synthesizing realistic speckle textures have been proposed [8, 9], but those approaches unavoidably introduce unrealistic warping into the simulated speckle texture.

