Abstract

Kidney development is key to the long-term health of the fetus. Renal volume and vascularity assessed by 3D ultrasound (3D-US) are known markers of wellbeing; however, the lack of real-time image segmentation solutions precludes these measures from being used in a busy clinical environment. In this work, we aimed to automate kidney segmentation using fully convolutional neural networks (fCNNs). We used multi-parametric input fusion incorporating 3D B-Mode and power Doppler (PD) volumes, aiming to improve segmentation accuracy. Three different fusion strategies were assessed against a single-input (B-Mode) network. Early input-level fusion provided the best segmentation accuracy, with an average Dice similarity coefficient (DSC) of 0.81 and Hausdorff distance (HD) of 8.96 mm, an improvement of 0.06 DSC and a reduction of 1.43 mm HD compared to our baseline network. Repeatability against manual segmentation was assessed for all models using intra-class correlation coefficients (ICC), indicating good to excellent reproducibility (ICC 0.93). The framework was extended to support multiple graphics processing units (GPUs) to better handle volumetric data, dense fCNN models, batch normalization, and complex fusion networks. This work, with its available source code, provides a framework for increasing the parameter space of encoder-decoder-style fCNNs across multiple GPUs and shows that applying multi-parametric 3D-US in fCNN training improves segmentation accuracy.
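The two core ideas summarized above, early input-level fusion and Dice-based evaluation, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the volume shape, channel ordering, and use of NumPy are assumptions made for the example.

```python
import numpy as np

# Hypothetical 3D-US volumes; the actual acquisition dimensions are not
# stated in the abstract, so a 64x64x64 grid is assumed for illustration.
b_mode = np.random.rand(64, 64, 64).astype(np.float32)        # B-Mode volume
power_doppler = np.random.rand(64, 64, 64).astype(np.float32)  # PD volume

# Early (input-level) fusion: stack the two modalities as channels so a
# single encoder sees both from the very first convolution onward.
fused_input = np.stack([b_mode, power_doppler], axis=0)  # shape (2, 64, 64, 64)

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred AND target| / (|pred| + |target|); eps guards
    against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

In a late- or mid-level fusion design, by contrast, each modality would pass through its own encoder branch before the feature maps are combined; early fusion keeps a single network but widens its input channel dimension.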
