Abstract

We consider the long-standing and largely open machine learning problem of detecting anomalous regions in multimodal 3D images. Purely data-driven methods often fail at such tasks because they rarely incorporate domain-specific knowledge into the algorithm and do not fully exploit the information available across modalities. We address these issues by proposing a novel framework that uses data fusion to leverage domain-specific knowledge and multimodal labeled data, and that employs the power of randomized learning techniques. To demonstrate the efficiency of the proposed framework, we apply it to the challenging task of detecting subtle pathologies in MRI scans. A distinct feature of the resulting solution is that it explicitly incorporates evidence-based medical knowledge about pathologies into the feature maps. Our experiments show that the method detects lesions in 71% of subjects using just one such feature. Integrating information from all feature maps and data modalities raises the detection rate to 78%. Using stochastic configuration networks to initialize the weights of the classification model improves precision by 18% compared with deterministic approaches. This demonstrates the possibility and practical viability of building efficient and interpretable randomized algorithms for automated anomaly detection in complex multimodal data.

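For readers unfamiliar with stochastic configuration networks (SCNs), the snippet below is a minimal, self-contained sketch of SCN-style incremental construction in NumPy, intended only to illustrate the general idea referenced in the abstract. It is not the paper's implementation: the function name scn_fit, the simplified supervisory check, the tanh activation, and all parameter values are illustrative assumptions. In an SCN, hidden-node weights are drawn at random, a candidate node is kept only if it passes a supervisory condition on the current residual error, and the output weights are then refit by least squares.

```python
import numpy as np

def scn_fit(X, y, max_nodes=50, candidates=20, lam=1.0, r=0.99, tol=1e-3, seed=0):
    """Illustrative SCN-style incremental model construction (assumed interface).

    X: (n_samples, n_features) input matrix; y: (n_samples,) target vector.
    Hidden weights are sampled at random; a candidate node is accepted only
    if a simplified supervisory condition on the residual is satisfied.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))           # accepted hidden-node activations
    W, B = [], []                  # accepted hidden weights and biases
    beta = np.zeros(0)
    residual = y.astype(float).copy()

    for _ in range(max_nodes):
        best = None
        for _ in range(candidates):
            w = rng.uniform(-lam, lam, size=d)      # random hidden weights
            b = rng.uniform(-lam, lam)              # random hidden bias
            h = np.tanh(X @ w + b)                  # candidate activation
            # Simplified supervisory check: the candidate must explain
            # enough of the current residual (cf. the SCN inequality).
            score = (h @ residual) ** 2 / (h @ h) - (1 - r) * (residual @ residual)
            if best is None or score > best[0]:
                best = (score, h, w, b)
        if best[0] <= 0:           # no admissible candidate found; stop growing
            break
        _, h, w, b = best
        H = np.column_stack([H, h])
        W.append(w); B.append(b)
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # refit output weights
        residual = y - H @ beta
        if residual @ residual < tol:
            break
    return np.array(W), np.array(B), beta
```

The key design choice, under these assumptions, is that randomness is constrained: random candidate nodes are screened by the supervisory condition, which is what distinguishes SCNs from unconstrained random-weight networks and is consistent with the precision gain over purely deterministic initialization reported in the abstract.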