Abstract

Representation learning is a critical task for medical image analysis in computer-aided diagnosis. However, learning discriminative features is challenging due to limited dataset sizes and a lack of labels. In this article, we propose a deep graph-based multimodal feature embedding (DGMFE) framework for medical image retrieval, applied to breast tissue classification by learning discriminative features of probe-based confocal laser endomicroscopy (pCLE) images. We first build a multimodality graph model based on the visual similarity between pCLE data and reference histology images. Latent similar pCLE-histology pairs are extracted by walking cyclic paths on the graph, while dissimilar pairs are extracted based on geodesic distance. Given the similar and dissimilar pairs, the latent feature space is discovered by reconstructing the similarity between pCLE and histology images via deep Siamese neural networks. The proposed method is evaluated on a clinical database of 700 pCLE mosaics. Retrieval accuracy shows that DGMFE outperforms previous feature-learning methods. In particular, the top-1 accuracy on an eight-class retrieval task is 0.739, a 10% improvement over the state-of-the-art method.
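
To make the pipeline concrete, the following Python sketch illustrates the three stages the abstract describes: building a cross-modal similarity graph, mining similar pairs via cyclic paths and dissimilar pairs via geodesic distance, and training a Siamese embedding. Every concrete choice here (cosine-similarity k-NN edges, a two-step cyclic path interpreted as mutual nearest neighbours, a geodesic-distance threshold, and a contrastive loss) is an illustrative assumption, not the paper's exact formulation.

    # Illustrative sketch of the DGMFE pipeline; all design choices below
    # (k-NN graph, mutual-NN cyclic paths, contrastive loss) are assumptions.
    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.sparse.csgraph import shortest_path


    def cosine(a, b):
        # Row-wise cosine similarity between two feature matrices.
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return a @ b.T


    def build_similarity_graph(pcle_feats, hist_feats, k=5):
        # Directed bipartite k-NN graph over pCLE and histology nodes:
        # pCLE i -> histology j if j is among i's k most similar histology
        # images, and histology j -> pCLE i symmetrically.
        sim = cosine(pcle_feats, hist_feats)
        n_p, n_h = sim.shape
        adj = np.zeros((n_p + n_h, n_p + n_h))
        for i in range(n_p):
            for j in np.argsort(-sim[i])[:k]:
                adj[i, n_p + j] = 1.0
        for j in range(n_h):
            for i in np.argsort(-sim[:, j])[:k]:
                adj[n_p + j, i] = 1.0
        return adj, n_p


    def mine_pairs(adj, n_p, far=4):
        # Similar pairs: pCLE i and histology j connected by a cyclic path
        # i -> j -> i (mutual nearest neighbours). Dissimilar pairs: nodes
        # whose geodesic (shortest-path) distance on the undirected graph
        # exceeds `far` hops (unreachable nodes have infinite distance).
        geo = shortest_path(adj, directed=False, unweighted=True)
        pos, neg = [], []
        for i in range(n_p):
            for j in range(n_p, adj.shape[0]):
                if adj[i, j] and adj[j, i]:
                    pos.append((i, j - n_p))
                elif geo[i, j] > far:
                    neg.append((i, j - n_p))
        return pos, neg


    class SiameseNet(nn.Module):
        # Two-branch embedding network with shared weights.
        def __init__(self, dim_in, dim_out=32):
            super().__init__()
            self.branch = nn.Sequential(
                nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_out))

        def forward(self, x1, x2):
            return self.branch(x1), self.branch(x2)


    def contrastive_loss(z1, z2, y, margin=1.0):
        # y = 1 for similar (positive) pairs, 0 for dissimilar pairs:
        # pull positives together, push negatives beyond the margin.
        d = torch.norm(z1 - z2, dim=1)
        return (y * d.pow(2)
                + (1 - y) * torch.clamp(margin - d, min=0).pow(2)).mean()

In this sketch, the mined pairs would be fed to SiameseNet and optimised with contrastive_loss using a standard optimiser; the paper's actual network architecture and training details are not reproduced here.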
