In this paper, we propose a novel visual tracking method based on conditional uncertainty minimization (CUM), minibatch Monte Carlo (MMC), and non-nested sampling (NNS). We represent a target as a Markov network whose nodes correspond to the pixels of the target and whose edges describe the relations between pixels. The nodes are then grouped into optimal cliques using the proposed CUM, which minimizes the conditional uncertainty (i.e., the variance of the conditional expectation) between two cliques. This minimization is made tractable by the proposed NNS. During visual tracking, the Markov networks evolve across frames to describe the geometrically varying appearance of the target. Although these networks cannot always represent the target perfectly, the target configuration can still be inferred accurately using the CUM, and the best configuration can be found at an early stage of Monte Carlo sampling using the proposed MMC. Experimental results demonstrate that our method qualitatively and quantitatively outperforms other state-of-the-art trackers on standard benchmark datasets. In particular, our method accurately tracks deformable objects in real time.
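As a rough illustration of the quantity CUM minimizes, the conditional uncertainty Var(E[X|Y]) between two cliques can be estimated from samples by binning one clique's feature Y and computing the variance of the per-bin means of the other clique's feature X. The binning estimator below is a hypothetical stand-in for the paper's own estimator, shown only to make the definition concrete:

```python
import numpy as np

def conditional_uncertainty(x, y, n_bins=8):
    # Estimate Var(E[x | y]): discretize y into bins, take the mean of x
    # within each bin, then compute the occupancy-weighted variance of
    # those conditional means. (Binning is an illustrative choice, not
    # the estimator used in the paper.)
    edges = np.linspace(y.min(), y.max(), n_bins + 1)
    idx = np.digitize(y, edges[1:-1])  # bin index in 0..n_bins-1
    means, weights = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            means.append(x[mask].mean())
            weights.append(mask.sum())
    means = np.asarray(means)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    grand_mean = (weights * means).sum()
    return float((weights * (means - grand_mean) ** 2).sum())

rng = np.random.default_rng(0)
y = rng.normal(size=5000)
x_dep = y + 0.1 * rng.normal(size=5000)  # strongly dependent clique pair
x_ind = rng.normal(size=5000)            # independent clique pair
print(conditional_uncertainty(x_dep, y))  # large: y explains most of x
print(conditional_uncertainty(x_ind, y))  # near zero
```

Pixels whose features yield a small Var(E[X|Y]) carry little mutual predictive structure, so a clique grouping that minimizes this quantity between cliques keeps the strongly coupled pixels together inside each clique.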