Abstract

Ultrasound (US) imaging is widely used for routine prenatal diagnosis. Biometric measurements derived from fetal segmentation are valuable for monitoring fetal health. However, accurate segmentation of US images places heavy demands on sonographers, making the task time-consuming and tedious. In this paper, we use DeepLabv3+ as the backbone and propose a network based on Integrated Semantic and Spatial Information of Multi-level Features (ISSMF) to automatically and accurately segment four parts of the fetus in US images, whereas most previous work segments only one or two parts. Our contributions are threefold. First, to combine the semantic information of high-level features with the spatial information of low-level features of US images, we introduce a multi-level feature fusion module that integrates features at different scales. Second, we leverage the content-aware reassembly of features (CARAFE) upsampler to further exploit the semantic and spatial information of these multi-level features. Third, to alleviate the performance degradation that batch normalization (BN) suffers when the batch size is small, we replace it with group normalization (GN). Experiments on four fetal parts in US images show that our method outperforms U-Net, DeepLabv3+, and U-Net++, and that the biometric measurements based on our segmentation results closely match those obtained by sonographers with ten years of work experience.
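The BN-to-GN substitution mentioned above can be sketched in plain Python. This is an illustrative sketch, not the paper's implementation (which would use a deep-learning framework's built-in GN layer): channels of a single sample are split into groups, and each group is normalized by its own mean and variance, so the statistics never depend on the batch size.

```python
import math

def group_norm(x, num_groups, eps=1e-5):
    """Group normalization for one sample, x: nested list [C][H][W].

    Channels are split into num_groups groups; each group is normalized
    by its own mean and variance. Unlike batch normalization, no
    statistics are shared across samples, so small batches are not a
    problem.
    """
    c = len(x)
    assert c % num_groups == 0, "channels must divide evenly into groups"
    group_size = c // num_groups
    out = []
    for g in range(num_groups):
        channels = x[g * group_size:(g + 1) * group_size]
        # Pool all values in this group of channels.
        vals = [v for ch in channels for row in ch for v in row]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        scale = 1.0 / math.sqrt(var + eps)
        for ch in channels:
            out.append([[(v - mean) * scale for v in row] for row in ch])
    return out

# Example: 4 channels of 2x2 features, normalized in 2 groups.
x = [[[1.0, 2.0], [3.0, 4.0]],
     [[5.0, 6.0], [7.0, 8.0]],
     [[1.0, 1.0], [1.0, 1.0]],
     [[2.0, 2.0], [2.0, 2.0]]]
y = group_norm(x, num_groups=2)
```

After normalization, each group of channels has zero mean and (up to `eps`) unit variance, regardless of how many samples are in the batch.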
