Abstract

Graph construction from data is a preliminary stage in many machine learning and computer vision tasks, such as semi-supervised learning, manifold learning, and spectral clustering. Despite its critical impact on accuracy, the influence of the graph construction procedure on learning tasks and their applications has received only limited study. State-of-the-art graphs are built via sparse coding with ℓ1 regularization, and these graphs perform well in many computer vision applications. However, the locality and similarity among instances are not explicitly exploited in the coding scheme. Furthermore, owing to the ℓ1 regularization, these construction approaches can be computationally expensive. In this paper, we investigate graph construction using the data self-representativeness property. By incorporating a variant of locality-constrained linear coding (LLC), we introduce and derive four variants of graph construction based on a two-phase LLC (TPLLC). Compared with recent ℓ1 graphs, the proposed objective function associated with three of the variants has an analytical solution and is therefore more efficient. A key element of the proposed methods is the second coding phase, which naturally incorporates data closeness, or locality: it codes each sample over a set of selected relevant samples and reinforces the individual regularization terms using the coefficients estimated in the first phase. Comprehensive experiments on several benchmark datasets show that the proposed methods match or outperform existing state-of-the-art results, and that they are more efficient than the robust ℓ1 graph construction schemes.
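The abstract outlines the two-phase scheme only at a high level, so the following is a minimal, hypothetical sketch of how such a TPLLC-style graph construction could look. It is not the authors' exact formulation: the locality adaptor (Euclidean distances as the diagonal regularizer), the selection of the `k` largest first-phase coefficients, and the reinforcement of the second-phase regularizer by the inverse magnitudes of those coefficients are all assumptions made for illustration. Both phases use the ridge-type analytical solution that the abstract attributes to three of the variants.

```python
import numpy as np

def tpllc_graph(X, k=5, lam=1e-3, eps=1e-8):
    """Illustrative two-phase locality-constrained coding graph.

    X   : (n, d) data matrix, one sample per row.
    k   : number of relevant samples kept for the second phase (assumed rule).
    lam : regularization weight.
    Returns a symmetric nonnegative (n, n) affinity matrix W.
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)        # self-representativeness: code
        B = X[idx]                              # x_i over the other samples
        d = np.linalg.norm(B - X[i], axis=1)    # locality adaptor (distances)
        # Phase 1: min ||x_i - B^T c||^2 + lam * ||diag(d) c||^2
        # -> analytical solution via the regularized normal equations
        G = B @ B.T
        c1 = np.linalg.solve(G + lam * np.diag(d**2) + eps * np.eye(n - 1),
                             B @ X[i])
        # Phase 2 (assumed): keep the k samples with largest coefficients and
        # reinforce the regularizer with the phase-1 coefficient magnitudes,
        # so weakly contributing samples are penalized more strongly.
        sel = np.argsort(-np.abs(c1))[:k]
        Bs = B[sel]
        w = 1.0 / (np.abs(c1[sel]) + eps)
        c2 = np.linalg.solve(Bs @ Bs.T + lam * np.diag(w) + eps * np.eye(k),
                             Bs @ X[i])
        W[i, idx[sel]] = np.abs(c2)
    return np.maximum(W, W.T)                   # symmetrize the affinity matrix
```

Because both phases reduce to small linear systems, the cost per sample is polynomial in `n` with no iterative ℓ1 solver, which is the efficiency advantage the abstract claims over ℓ1 graphs.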
