Abstract

Hypothesis pruning is an important prerequisite when working with outlier-contaminated data in many computer vision problems. However, the underlying random data structures are barely explored in the literature, which limits the design of efficient algorithms. To this end, we provide a novel graph-theoretic perspective on hypothesis pruning that exploits invariant structures in the data. We introduce the planted clique model, a central object in computational statistics, to investigate the information-theoretic and computational limits of the hypothesis pruning problem. In addition, we propose an inductive learning framework for finding hidden cliques that learns heuristics on synthetic graphs with planted cliques and generalizes to real vision problems. We present competitive experimental results, with large runtime improvements, on synthetic and widely used vision datasets to demonstrate its efficacy.
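
To make the planted clique model mentioned above concrete, the sketch below samples an Erdős–Rényi background graph G(n, p) and plants a clique on k randomly chosen vertices, the standard construction of the model. This is a generic illustration of how such synthetic training graphs can be generated; the function name, parameters, and use of networkx are our own assumptions and not taken from the paper.

```python
import itertools
import random

import networkx as nx


def planted_clique_graph(n: int, k: int, p: float = 0.5, seed: int | None = None):
    """Sample an Erdos-Renyi graph G(n, p) and plant a clique on k random vertices.

    Returns the graph and the set of planted-clique vertices. Illustrative sketch
    of the planted clique model, not the paper's exact data generator.
    """
    rng = random.Random(seed)
    # Background graph: each possible edge appears independently with probability p.
    graph = nx.gnp_random_graph(n, p, seed=seed)
    # Choose k vertices uniformly at random and connect every pair of them.
    clique = set(rng.sample(range(n), k))
    graph.add_edges_from(itertools.combinations(clique, 2))
    return graph, clique


if __name__ == "__main__":
    g, hidden = planted_clique_graph(n=200, k=20, seed=0)
    print(f"{g.number_of_nodes()} nodes, {g.number_of_edges()} edges; planted clique of size {len(hidden)}")
```

A learner trained on many such (graph, hidden clique) pairs can acquire heuristics for recovering the planted vertices, which is the kind of inductive setup the abstract describes.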
