Abstract

As a fundamental data structure, the graph has been widely used in machine learning, data mining, and computer vision. However, graph-based analysis such as kernel methods, spectral clustering, and manifold learning can reach a time complexity of O(m³), where m is the data size; the problem becomes intractable when the data is large-scale. Recently, low-rank matrix approximation has drawn considerable attention since it can extract the essential parts that are responsible for most of the action of the matrix. Nonetheless, the structural information embedded in massive data is inevitably ignored. In this paper, we argue that vector quantization can better reveal the intrinsic structure of large-scale data, and that both intra- and inter-cluster matrices should be exploited to boost the accuracy of low-rank matrix approximation. By considering both inter- and intra-cluster relationships, we achieve a better trade-off on different kinds of graphs. Extensive experiments demonstrate that the proposed framework not only keeps a lower time complexity but also performs comparably with the state of the art.
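The complexity reduction alluded to above can be illustrated with a standard Nyström-style low-rank sketch that selects landmark points by k-means clustering, i.e. a form of vector quantization. This is a generic illustration under assumed choices (RBF kernel, k-means landmarks), not a description of the paper's actual framework; all function names below are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel between the rows of X and Y
    d2 = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kmeans(X, k, iters=20, seed=0):
    # Plain Lloyd iterations; the centroids act as vector-quantization codewords
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                C[j] = pts.mean(0)
    return C

def nystrom_approx(X, k, gamma=1.0):
    # Landmarks from vector quantization (k-means centroids)
    C = kmeans(X, k)
    W = rbf_kernel(C, C, gamma)   # k x k intra-landmark kernel
    E = rbf_kernel(X, C, gamma)   # m x k data-to-landmark kernel
    # K ~= E W^+ E^T: the full m x m kernel is never formed, so downstream
    # spectral computations cost O(m k^2) rather than O(m^3)
    return E, np.linalg.pinv(W)

X = np.random.default_rng(1).normal(size=(500, 5))
E, Winv = nystrom_approx(X, 20)
K_approx = E @ Winv @ E.T
K_exact = rbf_kernel(X, X)
rel_err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
```

The abstract's point about intra- and inter-cluster matrices corresponds, in this sketch, to W (relations among codewords) and E (relations between data and codewords): exploiting both is what lets the small factorization stand in for the full kernel.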
