Abstract

We propose a new computational model for the study of massive data processing. Our model measures the complexity of reading the input data in terms of its very large size N, and analyzes the computational cost in terms of a parameter k that characterizes the computational power provided by limited local computing resources. We develop new algorithmic techniques for solving well-known computational problems in this model. In particular, for the graph matching problem on unweighted and weighted graphs, we develop randomized algorithms that, with very high probability, run in time O(N + g_1(k)) and space O(k^2). More specifically, our algorithm for unweighted graphs finds a k-matching (i.e., a matching of k edges) in a general unweighted graph in time O(N + k^2.5), and our algorithm for weighted graphs finds a maximum weighted k-matching in a general weighted graph in time O(N + k^3 log k).
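As a rough illustration of the model's pattern of a single sequential read of the size-N input combined with a small, k-bounded local workspace (and not the algorithm of the paper), the Python sketch below scans an edge stream once with O(k) local state and greedily collects a k-matching. This simple pass is only guaranteed to succeed when the graph has a matching of size at least 2k, whereas the paper's O(N + k^2.5) algorithm works whenever a k-matching exists; all names below are illustrative.

    # Minimal sketch (assumption: edges arrive as a stream of vertex pairs).
    # One pass over the size-N input; local state is at most k edges and
    # their 2k endpoints, i.e., O(k) space and O(N) time.
    def greedy_k_matching(edge_stream, k):
        """Scan the edge stream once; return a k-matching if the greedy pass finds one."""
        matched = set()    # endpoints of edges kept so far (at most 2k vertices)
        matching = []      # edges kept so far (at most k edges)
        for u, v in edge_stream:
            if len(matching) == k:
                break                        # k disjoint edges found; stop reading early
            if u != v and u not in matched and v not in matched:
                matching.append((u, v))      # edge is disjoint from the current matching
                matched.add(u)
                matched.add(v)
        return matching if len(matching) == k else None

    # Example: a path on 7 vertices has a maximum matching of 3 edges.
    edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)]
    print(greedy_k_matching(iter(edges), 3))   # prints [(1, 2), (3, 4), (5, 6)]

The guarantee follows from the standard fact that a maximal matching has at least half the size of a maximum matching: if the pass ends with fewer than k disjoint edges, the maximum matching has fewer than 2k edges.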
