Abstract

Identifying shoe-print impressions found at the scene of crime (SoC) in database images is a challenging problem in forensic science due to complicated impression surfaces, partially missing on-site impressions, and the large domain gap between the query and the gallery images. Existing approaches focus on feature extraction while ignoring the distinctive characteristics of shoe-print images. In this paper, we propose a novel multi-part weighted convolutional neural network (MP-CNN) for shoe-print image retrieval. Specifically, the proposed CNN model processes images in three steps: 1) dividing the input image vertically into two parts and extracting a sub-feature from each part with a parameter-shared network; 2) computing an importance weight for each sub-feature based on the informative pixels it contains and concatenating the weighted sub-features into the final feature; and 3) using the triplet loss function to measure the similarity between the query and the gallery images. In addition to the proposed network, we adopt an effective strategy, based on the U-Net structure, to enhance image quality and reduce the domain gap. Experimental evaluations demonstrate that our proposed method significantly outperforms other fine-grained cross-domain methods on the SPID dataset and obtains results comparable to state-of-the-art shoe-print retrieval methods on the FID300 dataset.
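The three steps above can be sketched in a minimal NumPy mock-up. This is not the paper's implementation: the `extractor` is a stand-in for the parameter-shared CNN branch, the informative-pixel weighting uses a simple intensity threshold as an illustrative assumption (the abstract does not specify the exact weighting scheme), and the split orientation is one possible reading of "dividing vertically".

```python
import numpy as np

def informative_weight(part, threshold=0.1):
    # Fraction of "informative" pixels (intensity above a threshold).
    # Illustrative stand-in for the paper's importance-weight matrix.
    return float((part > threshold).mean())

def multi_part_feature(image, extractor):
    # Step 1: split the print into two parts along its long axis and
    # run the same (parameter-shared) extractor on each part.
    h = image.shape[0] // 2
    top, bottom = image[:h], image[h:]
    f_top, f_bottom = extractor(top), extractor(bottom)
    # Step 2: weight each sub-feature by its informative-pixel ratio
    # and concatenate the weighted sub-features into the final feature.
    w = np.array([informative_weight(top), informative_weight(bottom)])
    w = w / (w.sum() + 1e-8)
    return np.concatenate([w[0] * f_top, w[1] * f_bottom])

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Step 3: standard triplet loss on Euclidean distances between the
    # query (anchor) feature and gallery features.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

With this formulation, a part that carries no informative pixels (e.g., a missing half of a partial impression) contributes nothing to the final descriptor, which matches the motivation of weighting sub-features by their information content.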
