Abstract
The increasing size of medical image archives and the complexity of medical images have led to the development of medical content-based image retrieval (CBIR) systems. These systems use the visual content of images for retrieval in addition to conventional textual annotation, making CBIR a useful tool in biomedical data management. Existing CBIR systems are typically designed for single-modality images and are restricted when multi-modal images, such as co-aligned functional positron emission tomography and anatomical computed tomography (PET/CT) images, are considered. Furthermore, the inherent spatial relationships among adjacent structures in biomedical images are not fully exploited. In this study, we present an innovative retrieval system for dual-modality PET/CT images, proposing graph-based methods to spatially represent the structural relationships within these images. We exploit the co-aligned functional and anatomical information in PET/CT, using attributed relational graphs (ARGs) to represent both modalities spatially and applying graph matching for similarity measurements. Quantitative evaluation demonstrated that our dual-modal ARG enabled the CBIR of dual-modality PET/CT. The potential of our dual-modal ARG in clinical application was also explored.
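To make the ARG idea concrete, the sketch below shows one minimal way such a representation and similarity score could look. This is an illustrative assumption, not the paper's actual method: node attributes (e.g. a normalized PET uptake value `suv` and CT density `hu`), edge attributes (e.g. a normalized spatial distance `dist`), and the scoring scheme are all hypothetical, and the node correspondence is supplied rather than searched for as a full graph-matching algorithm would do.

```python
# Hypothetical sketch of an attributed relational graph (ARG) similarity.
# Each segmented region of a PET/CT image is a node carrying functional
# (PET) and anatomical (CT) attributes; edges carry spatial relations.

def node_similarity(a, b):
    """Similarity of two attribute dicts, averaged over shared keys (0..1)."""
    keys = set(a) & set(b)
    if not keys:
        return 0.0
    # Assumes attribute values are normalized to [0, 1].
    return sum(1.0 - min(abs(a[k] - b[k]), 1.0) for k in keys) / len(keys)

def arg_similarity(g1, g2, mapping):
    """Score a given node correspondence between two ARGs.

    Each graph g = (nodes, edges): nodes is {id: attrs},
    edges is {(u, v): attrs}. `mapping` maps g1 node ids to g2 node ids.
    """
    n1, e1 = g1
    n2, e2 = g2
    # Node term: how well the matched regions agree on their attributes.
    node_score = sum(node_similarity(n1[u], n2[v]) for u, v in mapping.items())
    # Edge term: reward spatial relations that the mapping preserves.
    edge_score = sum(
        node_similarity(e1[(u, v)], e2[(mapping[u], mapping[v])])
        for (u, v) in e1
        if u in mapping and v in mapping and (mapping[u], mapping[v]) in e2
    )
    total = len(mapping) + len(e1)
    return (node_score + edge_score) / total if total else 0.0
```

A retrieval system would rank archive images by this score against the query graph; a real graph-matching step would additionally have to search over candidate node correspondences rather than receive one.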