Abstract

One way of addressing the problem of scarce and expensive data in deep learning for medical applications is transfer learning, i.e., fine-tuning a network that has been trained on a large data set. The common practice in transfer learning is to keep the shallow layers unchanged and to modify the deeper layers according to the new data set. This approach may not work when using a U-Net and transferring from a different domain to ultrasound (US) images, whose appearance differs drastically. In this study, we investigated the effect of fine-tuning different sets of layers of a pretrained U-Net for US image segmentation. Two different schemes were analyzed, based on two different definitions of shallow and deep layers. We studied simulated US images as well as two human US data sets, and we also included a chest X-ray data set. The results showed that choosing which layers to fine-tune is a critical task. In particular, they demonstrated that fine-tuning the last layers of the network, which is the common practice for classification networks, is often the worst strategy. It may therefore be more appropriate to fine-tune the shallow layers rather than the deep layers in US image segmentation with a U-Net, since shallow layers learn lower-level features that are critical for automatic segmentation of medical images. Even when a large US data set is available, we observed that fine-tuning the shallow layers is faster than fine-tuning the whole network.
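To make the layer-selection idea concrete, the sketch below freezes all parameters except those of the shallow encoder layers of a small U-Net-style model before fine-tuning. This is a minimal PyTorch sketch under assumed names (the TinyUNet architecture, layer names, and the "pretrained_unet.pt" checkpoint are illustrative, not the network or training setup used in the study).

import torch
import torch.nn as nn

# Toy encoder-decoder (U-Net-like) model used only for illustration.
class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())   # shallow layer
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())  # deeper layer
        self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())  # deeper layer
        self.out = nn.Conv2d(16, 1, 1)                                          # last layer

    def forward(self, x):
        x = self.enc1(x)
        x = self.enc2(x)
        x = self.dec1(x)
        return self.out(x)

model = TinyUNet()
# model.load_state_dict(torch.load("pretrained_unet.pt"))  # hypothetical pretrained checkpoint

# Fine-tune only the shallow layers: freeze every parameter whose top-level
# module is not in the "shallow" set.
shallow = {"enc1"}
for name, param in model.named_parameters():
    param.requires_grad = name.split(".")[0] in shallow

# Pass only the unfrozen (shallow) parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

Swapping the contents of the "shallow" set for the deeper module names would give the opposite scheme (freezing shallow layers and fine-tuning deep ones), which the abstract identifies as the common but often inferior practice for this task.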
