Abstract

One challenge of training deep neural networks on gigapixel whole-slide images (WSIs) is the lack of pixel-level or patch (instance)-level annotation, due to the high cost and time-consuming labeling effort. Multiple instance learning (MIL), a typical weakly supervised learning method, aims to resolve this challenge by using only the slide-level label, without needing patch labels. Not all patches/instances are predictive of the outcome; attention-based MIL leverages this fact to enhance performance by weighting instances according to their contribution to predicting the outcome. A WSI typically contains hundreds of thousands of image patches, and training a deep neural network on thousands of patches per slide is computationally expensive and requires a long time to converge. One way to alleviate this issue is to sample a subset of the available instances/patches within each bag for training. While the benefit of sampling strategies for decreasing computing time is evident, there has been little effort to investigate their performance. This project proposes an adaptive sampling strategy and compares it with other sampling strategies. Although all sampling strategies substantially reduce computation time, their performance is influenced by the number of selected instances. We show that if only a few instances can be selected (e.g., on the order of 1\(\sim \)10 instances), adaptive sampling outperforms the other sampling strategies. However, if more instances can be selected (e.g., on the order of 100\(\sim \)1000 instances), random sampling outperforms the other strategies.

Keywords: Attention · Computational pathology · Deep learning · Multiple instance learning · Prostate cancer · Sampling · Transfer learning · Weakly supervised learning
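The attention-based instance weighting and attention-guided sampling described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed details, not the authors' implementation: the gated-tanh scoring form, the projection shapes, and the function names (`attention_mil_pool`, `sample_by_attention`) are assumptions for illustration only.

```python
import numpy as np

def attention_mil_pool(instances, v, w):
    """Attention-based MIL pooling (sketch): score each instance
    embedding, softmax-normalize the scores into attention weights,
    and return the attention-weighted bag embedding."""
    # instances: (n, d); v: (d, h); w: (h,)
    scores = np.tanh(instances @ v) @ w        # one scalar score per instance
    scores = scores - scores.max()             # shift for numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()
    bag_embedding = attn @ instances           # (d,) weighted sum of instances
    return bag_embedding, attn

def sample_by_attention(attn, k, rng):
    """Hypothetical attention-guided sampling: draw k distinct instance
    indices with probability proportional to their attention weights."""
    k = min(k, len(attn))
    return rng.choice(len(attn), size=k, replace=False, p=attn)

rng = np.random.default_rng(0)
bag = rng.normal(size=(500, 16))   # 500 instance embeddings of dimension 16
v = rng.normal(size=(16, 8))
w = rng.normal(size=(8,))
emb, attn = attention_mil_pool(bag, v, w)
idx = sample_by_attention(attn, 10, rng)  # keep only 10 of 500 patches
```

In a training loop, such a sampler would let the network see only a small, attention-informed subset of each bag per iteration, trading some signal for a large reduction in compute, which is the trade-off the abstract studies.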
