Multi-view learning improves classification accuracy by integrating information from multiple sources. To guarantee the reliability of multi-view classification, trusted multi-view classification methods have been explored. However, existing trusted multi-view classification methods remain vulnerable to low-quality views containing adversarial samples, because it is difficult to accurately assess the quality of data views that are under adversarial attack. To address this issue, we propose a robust multi-view classification method with a dissonance measure for adversarial samples. Specifically, the proposed method uses the evidential dissonance measure from subjective logic to assess the quality of data views under adversarial attack. Based on this dissonance measure, we further propose a dissonance-aware belief integration strategy for multi-view information fusion and construct an inter-view evidential gradient penalty in the learning objective to improve the model's robustness against adversarial samples. Experiments on diverse multi-view datasets confirm the reliability and robustness of the proposed method.
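As a minimal sketch of the quality signal the abstract refers to: in subjective logic, belief masses derived from Dirichlet evidence can be scored with the standard evidential dissonance measure, which is high when strong beliefs conflict (e.g., two classes with equal evidence) and low when evidence is concentrated or vacuous. The function name and the evidence-to-belief parameterization below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def dissonance(evidence):
    """Evidential dissonance of a Dirichlet opinion in subjective logic.

    evidence: non-negative evidence counts for K classes (illustrative input).
    Belief masses are b_k = e_k / S with S = sum(e) + K (uniform base rate).
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    S = evidence.sum() + K          # Dirichlet strength
    b = evidence / S                # belief masses (uncertainty u = K / S)

    diss = 0.0
    for i in range(K):
        others = np.delete(b, i)
        denom = others.sum()
        if denom > 0:
            # Relative balance Bal(b_j, b_i) = 1 - |b_j - b_i| / (b_j + b_i)
            pair_sum = others + b[i]
            bal = np.where(pair_sum > 0,
                           1.0 - np.abs(others - b[i]) / np.where(pair_sum > 0, pair_sum, 1.0),
                           0.0)
            diss += b[i] * (others * bal).sum() / denom
    return diss
```

Under this sketch, conflicting evidence such as `[10, 10, 0]` yields high dissonance, while one-sided evidence such as `[10, 0, 0]` yields zero, which is the property that lets dissonance flag views whose evidence has been pushed into conflict by adversarial perturbations.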