Abstract

Interactive segmentation has recently attracted attention for specialized tasks where specialist input is required to further improve segmentation performance. In this work, we propose a novel interactive segmentation architecture and loss function in which user clicks are dynamically resized based upon the current segmentation mask. A weight map is formed from the user's selected regions and is incorporated into a deep neural network through a novel weighted loss function. An interactive U-Net (IU-Net) model, which uses both foreground and background user clicks as the main method of interaction, is employed to evaluate our loss function. In addition to the IU-Net, we propose a two-stream fusion interactive U-Net (TSFIU-Net), which applies multimodal fusion to propagate image feature information throughout the architecture. This model is also tested with the same loss function and dynamically changing click sizes to quantify the gain in accuracy. We experiment on spleen and colon cancer CT data from the MSD dataset held at MICCAI 2018 and improve overall segmentation accuracy compared to the standard U-Net using our weighted loss function. Dynamic user click sizes improve accuracy by 8.88% and 2.16%, respectively, using only a single user interaction on the IU-Net, and by 13.9% and 3.92% on the TSFIU-Net.
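To make the idea of a click-derived weight map and dynamic click size concrete, the following is a minimal sketch, not the authors' implementation: it assumes a binary segmentation setting in PyTorch, and the Gaussian click weighting, the `click_weight_map` and `dynamic_radius` helpers, and the linear radius scaling are illustrative assumptions rather than the paper's exact scheme.

```python
# Minimal sketch (not the authors' code): a click-derived weight map and a
# per-pixel weighted loss, with the click radius scaled by the current mask.
import torch
import torch.nn.functional as F


def click_weight_map(shape, clicks, radius, amplitude=5.0, base=1.0):
    """Build an (H, W) weight map that emphasizes pixels near user clicks.

    clicks: list of (row, col) coordinates of user clicks.
    radius: click influence radius in pixels (made dynamic below).
    """
    h, w = shape
    ys = torch.arange(h).view(h, 1).float()
    xs = torch.arange(w).view(1, w).float()
    weights = torch.full((h, w), float(base))
    for (r, c) in clicks:
        dist2 = (ys - r) ** 2 + (xs - c) ** 2
        weights += amplitude * torch.exp(-dist2 / (2.0 * radius ** 2))
    return weights


def dynamic_radius(current_mask, min_radius=3.0, max_radius=15.0):
    """Scale the click radius with the area of the current predicted mask
    (assumed linear scaling, for illustration only)."""
    area_frac = current_mask.float().mean().item()
    return min_radius + (max_radius - min_radius) * area_frac


def weighted_bce_loss(logits, target, weight_map):
    """Per-pixel binary cross-entropy weighted by the click-derived map."""
    return F.binary_cross_entropy_with_logits(logits, target, weight=weight_map)


# Toy usage: one foreground click on a 64x64 slice.
logits = torch.randn(64, 64)                    # network output (pre-sigmoid)
target = (torch.rand(64, 64) > 0.5).float()     # ground-truth mask
current_mask = torch.sigmoid(logits) > 0.5      # current predicted mask
radius = dynamic_radius(current_mask)
weights = click_weight_map((64, 64), clicks=[(32, 32)], radius=radius)
loss = weighted_bce_loss(logits, target, weights)
```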
