Abstract

Existing multi-instance learning (MIL) methods for medical image classification typically segment an image (bag) into small patches (instances) and learn a classifier to predict the label of an unknown bag. Most such methods assume that instances within a bag are independently and identically distributed. However, instances in the same bag often interact with each other. In this paper, we propose an Induced Self-Attention based deep MIL method that uses the self-attention mechanism to learn the global structure information within a bag. To alleviate the computational complexity of a naive implementation of self-attention, we introduce an inducing-point based scheme into the self-attention block. We show empirically that the proposed method is superior to other deep MIL methods in terms of performance and interpretability on three medical image data sets. We also employ a synthetic MIL data set to provide an in-depth analysis of the effectiveness of our method. The experimental results reveal that the induced self-attention mechanism can learn highly discriminative and distinct features for target and non-target instances within a bag, and thus generalizes to a wider range of MIL problems.
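The inducing-point idea mentioned in the abstract can be illustrated with a minimal sketch: instead of letting all n instances in a bag attend to each other (O(n^2) cost), a small set of m learnable inducing points first attends to the instances, and the instances then attend back to that summary, giving O(nm) cost. The sketch below is a simplified, hedged illustration (plain NumPy, no learned projection matrices, single head); the function and variable names are our own assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def induced_self_attention(X, I):
    """X: (n, d) instance features of one bag; I: (m, d) inducing points.

    Two attention passes of cost O(n*m) replace one O(n^2) self-attention:
    the inducing points summarize the bag, then each instance reads from
    that summary, so global structure is still propagated to every instance.
    """
    H = attention(I, X, X)    # (m, d): inducing points attend to instances
    return attention(X, H, H) # (n, d): instances attend back to the summary

rng = np.random.default_rng(0)
n, m, d = 50, 5, 16            # n instances, m << n inducing points
X = rng.normal(size=(n, d))    # hypothetical patch embeddings of one bag
I = rng.normal(size=(m, d))    # in practice learned by gradient descent
out = induced_self_attention(X, I)
print(out.shape)  # (50, 16): one updated feature vector per instance
```

In a full model, the output features would be pooled (e.g. by an attention-based pooling layer) into a single bag representation before the bag-level classifier.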
