Abstract

A conventional framework for learning generic translation-invariant 2nd-order Markov-Gibbs random field (MGRF) models of spatially homogeneous textures is extended to higher-order models that are also invariant to arbitrary perceptive (contrast-offset) signal deviations. Given a training image, the framework estimates both the geometry and strengths (potentials) of multiple conditional signal dependencies, called interactions. The potentials are approximated analytically, and characteristic interactions are selected by analysing an empirical distribution of energies (sums of the potentials) over a large number of candidate 3rd- and 4th-order interactions. The descriptive abilities of the learned generic translation- and contrast/offset-invariant 2nd- to 4th-order MGRFs are tested on 50 classes of textures from the Brodatz and OUTEX databases in a semi-supervised texture recognition setting. Compared with our previous work [11], the contributions of this paper are two-fold: (i) to analyse the classification performance trend, the MGRF models have been extended up to the 4th order; and (ii) to select characteristic interactions, the heuristic iterative unimodal thresholding of the energy distribution used in [11] is replaced by estimating the dominant modes of this distribution. The distribution is approximated by a Gaussian mixture fitted with the Expectation-Maximization (EM) algorithm, the number of mixture components being determined by the Akaike Information Criterion (AIC). The target interactions are then selected either by unimodal thresholding or by finding the intersection between the mixture components corresponding to the lowest and the second-lowest energy modes.
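The mode-estimation step of the interaction-selection procedure can be sketched as follows: fit 1-D Gaussian mixtures of increasing size to the empirical energy sample with EM, pick the component count that minimises the AIC, and threshold the candidate interactions at the intersection of the two lowest-mean (lowest-energy) components. This is a minimal illustration on synthetic data, not the authors' implementation; all function names and the synthetic energy sample below are invented for the sketch.

```python
import numpy as np

def fit_gmm_1d(x, m, n_iter=300):
    """Fit a 1-D Gaussian mixture with m components by plain EM."""
    # Deterministic quantile-based initialisation of the means.
    mu = np.quantile(x, (np.arange(m) + 1.0) / (m + 1.0))
    var = np.full(m, x.var())
    w = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2.0 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k + 1e-9
    dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
            / np.sqrt(2.0 * np.pi * var))
    log_lik = np.log(dens.sum(axis=1)).sum()
    return w, mu, var, log_lik

def aic(log_lik, m):
    # Free parameters: m means + m variances + (m - 1) weights.
    return 2 * (3 * m - 1) - 2 * log_lik

# Synthetic bimodal "energy" sample standing in for the empirical
# energies of the candidate interactions (invented for this sketch).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(1.5, 0.7, 300)])

# Choose the number of mixture components by minimising the AIC.
fits = {m: fit_gmm_1d(x, m) for m in (1, 2, 3)}
best = min(fits, key=lambda m: aic(fits[m][3], m))
w, mu, var, _ = fits[best]

# Threshold at the intersection of the two lowest-mean components.
i, j = np.argsort(mu)[:2]
grid = np.linspace(mu[i], mu[j], 2000)
d = lambda k: (w[k] * np.exp(-0.5 * (grid - mu[k]) ** 2 / var[k])
               / np.sqrt(2.0 * np.pi * var[k]))
cross = grid[np.argmin(np.abs(d(i) - d(j)))]
selected = x[x <= cross]  # interactions falling in the lowest-energy mode
```

In this sketch the AIC balances the mixture's log-likelihood against its parameter count, so the spurious single-component fit is rejected for a clearly bimodal energy sample, and the crossing point of the two lowest-energy components gives the selection threshold.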
