Radiology images of the chest, such as computed tomography scans and X-rays, have been prominently used in computer-aided COVID-19 analysis. Learning-based radiology image retrieval has attracted increasing attention recently; it generally involves extracting image features and finding matches in large image databases based on query images. Many deep hashing methods have been developed for chest radiology image search due to the high efficiency of retrieval with hash codes. However, they often overlook the triplet associations among images; that is, images of the same category tend to share similar characteristics, whereas images of different categories do not. To address this, we develop a triplet-constrained deep hashing (TCDH) framework for chest radiology image retrieval to facilitate automated analysis of COVID-19. The TCDH consists of two phases: (a) feature extraction and (b) image retrieval. For feature extraction, we introduce a triplet constraint and an image reconstruction task to enhance the discriminative ability of the learned features, and these features are then converted into binary hash codes to capture semantic information. Specifically, the triplet constraint is designed to pull samples of the same category closer together and push samples of different categories apart. Additionally, an auxiliary image reconstruction task is employed during feature extraction to help capture the anatomical structures in the images. For image retrieval, we use the learned hash codes to search for matching medical images. Extensive experiments on 30,386 chest X-ray images demonstrate the superiority of the proposed method over several state-of-the-art approaches in automated image search. The code is now available online.
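A minimal sketch of the kind of triplet constraint and hash-code binarization described above, not the authors' implementation: the margin value, function names, and embedding sizes are illustrative assumptions.

```python
# Illustrative sketch only: a hinge-style triplet constraint that pulls
# same-category embeddings together and pushes different-category embeddings
# apart, followed by sign-based binarization into hash codes for retrieval.
import torch
import torch.nn.functional as F


def triplet_constraint(anchor, positive, negative, margin=0.5):
    """Triplet loss on L2-normalized feature embeddings (margin is assumed)."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negative = F.normalize(negative, dim=1)
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # distance to same-category sample
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # distance to different-category sample
    return F.relu(d_pos - d_neg + margin).mean()


def to_hash_codes(features):
    """Binarize real-valued features into {-1, +1} codes for Hamming-distance search."""
    return torch.sign(features)


# Toy usage: a batch of 8 triplets with 64-dimensional embeddings.
a, p, n = (torch.randn(8, 64) for _ in range(3))
loss = triplet_constraint(a, p, n)
codes = to_hash_codes(a)
print(loss.item(), codes.shape)
```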