Abstract

Due to the diversity of image sources, content-based multi-source image fusion and retrieval have shown promising capabilities in computer vision tasks, especially when applied in Computer-Aided Diagnosis (CAD) to automate and improve the accuracy of medical image analysis. The combination of computer vision and CAD systems has the potential to revolutionize healthcare by augmenting the expertise of clinicians, improving overall diagnostic accuracy, and supporting clinical decision-making by classifying and retrieving annotated clinical images similar to a given query. In the context of multi-view mammography interpretation, multi-view feature fusion has recently been studied to improve retrieval performance while exploiting the complementarity of the mediolateral oblique (MLO) and craniocaudal (CC) views. However, conventional multi-view feature fusion produces long descriptors and fails to take into account the relationship between them. To address this issue, we propose two hierarchical multi-view feature fusion methods for multi-view mammogram retrieval based on Canonical Correlation Analysis (CCA), one of the most widely used multivariate statistical methods. Specifically, we adapt CCA to capture the relationship between two descriptors by computing latent correlation factors. Moreover, after extracting descriptors for each view, a comparative study of texture and shape fusion descriptors is conducted to identify the most discriminative features for multi-view mammogram retrieval. Then, a query-dependent distance metric preserving both visual resemblance and semantic similarity is applied to dynamically determine the most appropriate distance measure for each query image. Extensive experiments on the challenging Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) demonstrate the effectiveness of the proposed hierarchical multi-view feature fusion for mammogram retrieval, which outperforms both conventional fused information and single-view information. To improve the transparency of our work, the source code of the proposed method and the related dataset (including readme files) are publicly accessible through the following GitHub link: https://github.com/ABDERRAHIMMAR/Multi-View-Feature-Fusion-for-Mammogram-Retrieval . This open-access resource enables researchers and practitioners to examine our methodology in depth, fostering collaboration and advances in computer-aided diagnosis and medical image analysis.
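To make the CCA-based fusion step concrete, the sketch below illustrates one way paired view-specific descriptors (e.g., an MLO and a CC feature vector per case) could be projected onto latent correlation factors and fused. This is a minimal illustration under stated assumptions, not the exact pipeline of the paper: the use of scikit-learn's CCA, the number of latent components, and the concatenation-based fusion are all illustrative choices.

```python
# Minimal sketch of CCA-based fusion of two view descriptors (MLO and CC).
# Assumptions (not taken from the paper): scikit-learn's CCA, 10 latent factors,
# and fusion by concatenating the projected components of both views.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_fuse(mlo_features: np.ndarray, cc_features: np.ndarray, n_components: int = 10):
    """Project paired MLO/CC descriptors onto latent correlation factors and fuse them.

    mlo_features, cc_features: arrays of shape (n_samples, n_features_per_view).
    Returns a fused descriptor of shape (n_samples, 2 * n_components).
    """
    cca = CCA(n_components=n_components)
    cca.fit(mlo_features, cc_features)                 # learn maximally correlated subspaces
    mlo_c, cc_c = cca.transform(mlo_features, cc_features)
    return np.concatenate([mlo_c, cc_c], axis=1)       # fuse the latent correlation factors

# Toy usage with random descriptors standing in for texture/shape features.
rng = np.random.default_rng(0)
mlo = rng.normal(size=(100, 64))   # e.g., texture descriptors extracted from the MLO view
cc = rng.normal(size=(100, 48))    # e.g., texture descriptors extracted from the CC view
fused = cca_fuse(mlo, cc)
print(fused.shape)                 # (100, 20): shorter than the 112-D naive concatenation
```

Compared with directly concatenating the raw per-view descriptors, fusing in the CCA subspace keeps the combined descriptor compact while explicitly modeling the correlation between the two views.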
