Abstract

The rapid advancement of high-performance computing hardware and the corresponding rise of deep convolutional neural network (CNN) architectures have led to state-of-the-art results on several biomedical image segmentation tasks. In particular, U-Net, a modified fully convolutional network, has become the state of the art in various two-dimensional and three-dimensional semantic (pixel-level) segmentation tasks in medicine. U-Net has been most successful on datasets where well-annotated ground truth is scarce. However, there has been no detailed analysis of how computing configuration, input data type, preprocessing, and data augmentation techniques affect U-Net's training speed and end-to-end computation pipeline. In this paper, we train U-Net on multiple GPU configurations (single- vs. multi-node) and evaluate memory and time efficiency to find an optimal setup for U-Net segmentation tasks.
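One of the factors the abstract names is data augmentation for segmentation. A minimal sketch of what paired image/mask augmentation typically looks like is shown below; the `augment_pair` helper is a hypothetical illustration, not code from the paper, and assumes 2D single-channel inputs:

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply the same random flips/rotation to an image and its mask.

    For semantic (pixel-level) segmentation, the ground-truth mask must
    follow every spatial transform applied to the image, or the labels
    become misaligned with the pixels they annotate.
    """
    if rng.random() < 0.5:                      # random horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                      # random vertical flip
        image, mask = image[::-1, :], mask[::-1, :]
    k = int(rng.integers(0, 4))                 # 0-3 quarter turns
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    return image.copy(), mask.copy()

# Tiny example: a 4x4 "image" and a mask derived from it by thresholding.
rng = np.random.default_rng(0)
img = np.arange(16, dtype=np.float32).reshape(4, 4)
msk = (img > 7).astype(np.uint8)
aug_img, aug_msk = augment_pair(img, msk, rng)

# Alignment check: thresholding the augmented image reproduces the
# augmented mask, so image and mask were transformed identically.
assert np.array_equal(aug_msk, (aug_img > 7).astype(np.uint8))
```

Because the same random decisions are applied to both arrays, the mask stays pixel-aligned with the image regardless of which transforms fire.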
