Abstract
Multi-instance learning (MIL) is widely used in practical applications that involve complex data structures. MIL methods can be broadly categorized into two types: traditional methods and those based on deep learning. These approaches have yielded significant results, especially in their problem-solving strategies and experimental validation, providing valuable insights for researchers in the MIL field. However, considerable knowledge is often trapped within each algorithm, so subsequent MIL algorithms rely solely on the model's data fitting to predict unlabeled samples. This results in a significant loss of knowledge and impedes the development of more powerful models. In this article, we propose a novel data-driven knowledge fusion for deep MIL (DKMIL) algorithm. DKMIL adopts a completely different idea from existing deep MIL methods: it analyzes the decision-making of key samples in the dataset (referred to as data-driven analysis) and uses a knowledge fusion module designed to extract valuable information from these samples to assist the model's learning. In other words, this module serves as a new interface between data and model, providing strong scalability and enabling prior knowledge from existing algorithms to enhance the model's learning ability. Furthermore, to adapt the downstream modules of the model to the more knowledge-enriched features extracted by the data-driven knowledge fusion (DDKF) module, we propose a two-level attention (TLA) module that gradually learns shallow- and deep-level features of the samples to achieve more effective classification. We prove the scalability of the knowledge fusion module and verify the efficiency of the proposed architecture through experiments on 62 datasets across five categories.
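To make the described pipeline concrete, the following is a minimal, illustrative PyTorch sketch of the two ideas summarized above: a knowledge fusion step that mixes instance features with a bank of pre-selected key samples, followed by a two-level (shallow and deep) attention pooling for bag-level classification. All names here (KnowledgeFusion, TwoLevelAttentionMIL, key_bank, the layer sizes, and the residual fusion) are assumptions made for illustration; they do not reproduce the authors' actual DDKF or TLA implementations.

```python
# Illustrative sketch only -- not the paper's released implementation.
import torch
import torch.nn as nn


class KnowledgeFusion(nn.Module):
    """Hypothetical stand-in for the DDKF module: attends each instance
    over a bank of key-sample embeddings and fuses the result back in."""

    def __init__(self, in_dim, key_bank):               # key_bank: (K, in_dim) tensor
        super().__init__()
        self.register_buffer("key_bank", key_bank)
        self.proj = nn.Linear(in_dim, in_dim)

    def forward(self, x):                                # x: (n_instances, in_dim)
        sim = torch.softmax(x @ self.key_bank.t(), dim=1)   # (n, K) affinity to key samples
        knowledge = sim @ self.key_bank                      # (n, in_dim) knowledge summary
        return x + torch.relu(self.proj(knowledge))          # residual fusion


class TwoLevelAttentionMIL(nn.Module):
    """Two attention stages: a shallow one that re-weights instances and a
    deep one that pools them into a single bag embedding for classification."""

    def __init__(self, in_dim, hid_dim, key_bank):
        super().__init__()
        self.fusion = KnowledgeFusion(in_dim, key_bank)
        self.shallow_att = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh(),
                                         nn.Linear(hid_dim, 1))
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.deep_att = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.Tanh(),
                                      nn.Linear(hid_dim, 1))
        self.classifier = nn.Linear(hid_dim, 1)

    def forward(self, bag):                              # bag: (n_instances, in_dim)
        h = self.fusion(bag)
        w1 = torch.softmax(self.shallow_att(h), dim=0)   # shallow instance weights
        h = self.encoder(w1 * h * h.size(0))             # re-weighted instances, then encoded
        w2 = torch.softmax(self.deep_att(h), dim=0)      # deep pooling weights
        z = (w2 * h).sum(dim=0)                          # bag embedding
        return torch.sigmoid(self.classifier(z))         # bag-level probability


if __name__ == "__main__":
    key_bank = torch.randn(8, 32)                        # 8 hypothetical key samples
    model = TwoLevelAttentionMIL(in_dim=32, hid_dim=64, key_bank=key_bank)
    bag = torch.randn(15, 32)                            # one bag with 15 instances
    print(model(bag))                                     # tensor holding one probability
```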