Matching a reference image to a secondary image extracted from a database of transformed exemplars constitutes an important image retrieval task. Two related problems are the specification of a general class of discriminatory image features and of an appropriate similarity measure to rank the closeness of the query to the database. In this paper we present a general method based on matching high dimensional image features, using entropic similarity measures that can be empirically estimated with entropic graphs such as the minimal spanning tree (MST). The entropic measures we consider are generalizations of the well-known Kullback–Leibler (KL) distance, the mutual information (MI) measure, and the Jensen difference. Our entropic graph approach has the advantage of being implementable for high dimensional feature spaces in which other entropy-based pattern matching methods are computationally difficult. We compare our technique to previous entropy matching methods for a variety of continuous and discrete feature sets including single pixel gray levels, tag sub-image features, and independent component analysis (ICA) features. We illustrate the methodology for multimodal face retrieval and ultrasound (US) breast image registration.
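To make the entropic graph idea concrete, the following is a minimal sketch (not the authors' implementation) of how an MST can yield a Rényi α-entropy estimate for an n-by-d sample of continuous features: the MST edge lengths, raised to the power γ = d(1 − α) and summed, give an entropy estimate up to a dimension-dependent additive constant. That constant is omitted here, which is acceptable when only relative comparisons between feature sets of the same dimension are needed; all function and parameter names below are illustrative assumptions.

```python
# Sketch of an MST-based Renyi alpha-entropy estimate (up to an additive
# constant), in the spirit of the entropic-graph estimators described above.
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_renyi_entropy(X, alpha=0.5):
    """Estimate the Renyi alpha-entropy of samples X (shape n x d),
    modulo a dimension-dependent bias constant."""
    n, d = X.shape
    gamma = d * (1.0 - alpha)            # edge-weight exponent
    D = distance_matrix(X, X)            # pairwise Euclidean distances
    mst = minimum_spanning_tree(D)       # sparse matrix of MST edges
    L_gamma = np.sum(mst.data ** gamma)  # total weighted edge length
    # Entropy estimate, omitting the additive normalization constant
    return (1.0 / (1.0 - alpha)) * np.log(L_gamma / n ** alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))        # e.g. 8-dimensional feature vectors
    print(mst_renyi_entropy(X, alpha=0.5))
```

In practice such an estimate would be computed on feature samples drawn from the reference and secondary images (or their joint features) and combined into a KL-, MI-, or Jensen-type similarity score for ranking database entries.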