Abstract

Fetal development is noninvasively assessed by measuring the size of different structures in ultrasound (US) images. The reliability of these measurements depends upon the identification of the correct anatomical viewing plane, each of which contains different fetal structures. However, the automatic classification of anatomical planes in fetal US images is challenging due to a number of factors, such as low signal-to-noise ratios and the small size of the fetus. Current approaches to plane classification are limited to simpler subsets of the problem: classifying only planes within specific body regions, or relying on temporal information from videos. In this paper, we propose a new general method for the classification of anatomical planes in fetal US images. Our method trains two convolutional neural networks to learn discriminative US and saliency features. The fusion of these features overcomes the challenges associated with fetal US imaging by emphasising the salient features within US images that best discriminate different planes. Our method achieved higher classification accuracy than a state-of-the-art baseline for 12 of the 13 different planes found in a clinical dataset of fetal US images.
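
The following is a minimal PyTorch sketch of the two-stream idea the abstract describes: one CNN over the raw US image, a second over a saliency map, with the two feature vectors fused before classification. The branch depths, layer sizes, and concatenation-based fusion are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PlaneClassifier(nn.Module):
    """Two-stream CNN: US branch + saliency branch, fused for plane classification."""
    def __init__(self, num_planes=13, feat_dim=128):
        super().__init__()
        def branch():
            # Small convolutional feature extractor for a 1-channel input
            # (architecture is a placeholder, not the paper's design).
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
        self.us_branch = branch()        # learns features from the raw US image
        self.saliency_branch = branch()  # learns features from the saliency map
        self.classifier = nn.Linear(2 * feat_dim, num_planes)

    def forward(self, us_img, saliency_map):
        # Fuse the two feature vectors by concatenation, then classify.
        fused = torch.cat([self.us_branch(us_img),
                           self.saliency_branch(saliency_map)], dim=1)
        return self.classifier(fused)

# Example forward pass on dummy 224x224 single-channel inputs.
model = PlaneClassifier()
us = torch.randn(4, 1, 224, 224)
sal = torch.randn(4, 1, 224, 224)
logits = model(us, sal)  # shape: (4, 13), one score per anatomical plane
```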
