Abstract
Given a large set U where each item a ∈ U has weight w(a), we want to estimate the total weight W = Σ_{a∈U} w(a) to within a factor of 1 ± ε with some constant probability > 1/2. Since n = |U| is large, we want to do this without looking at the entire set U. In the traditional setting, in which we are allowed to sample elements from U uniformly, sampling Ω(n) items is necessary to provide any non-trivial guarantee on the estimate. Therefore, we investigate this problem in different settings: in the proportional setting we can sample items with probabilities proportional to their weights, and in the hybrid setting we can sample both proportionally and uniformly. These settings have applications, for example, in sublinear-time algorithms and distribution testing.

Sum estimation in the proportional and hybrid settings has been considered before by Motwani, Panigrahy, and Xu [ICALP 2007]. In their paper, they give both upper and lower bounds in terms of n. Their bounds are near-matching in terms of n, but not in terms of ε. In this paper, we improve both their upper and lower bounds. Our bounds are matching up to constant factors in both settings, in terms of both n and ε. No lower bounds with a dependency on ε were known previously. In the proportional setting, we improve their algorithm to one using O(√n/ε) samples. In the hybrid setting, we improve the sample complexity to O(∛n/ε^(4/3)). Our algorithms are also significantly simpler and do not have large constant factors.

We then investigate the previously unexplored scenario in which n is not known to the algorithm. In this case, we obtain an Õ(√n/ε) algorithm for the proportional setting, and an O(√n/ε) algorithm for the hybrid setting. This means that in the proportional setting, we may remove the need for advice without greatly increasing the complexity of the problem, while there is a major difference in the hybrid setting. We prove that this difference in the hybrid setting is necessary, by showing a matching lower bound.

Our algorithms have applications in the area of sublinear-time graph algorithms. Consider a large graph G = (V, E) and the task of (1 ± ε)-approximating |E|. We consider the (standard) settings where we can sample uniformly from E, or from both E and V. This relates to sum estimation as follows: we set U = V and let the weights be equal to the degrees. Uniform sampling then corresponds to sampling vertices uniformly, and proportional sampling can be simulated by taking a random edge and picking one of its endpoints at random. If we can only sample uniformly from E, then our results immediately give an Õ(√n/ε) algorithm. When we may sample from both E and V, our results imply an algorithm with complexity O(∛n/ε^(4/3)). Surprisingly, one of our subroutines provides a (1 ± ε)-approximation of |E| using Õ(d/ε²) expected samples, where d is the average degree, under the mild assumption that at least a constant fraction of the vertices are non-isolated. This subroutine works in the setting where we can sample uniformly from both V and E. We find this remarkable, since it is Õ(1/ε²) for sparse graphs.
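To make the graph-counting reduction concrete, below is a minimal Python sketch (not the paper's estimator; the function names, edge-list representation, and toy graph are illustrative assumptions). It shows how a uniform sample from E simulates degree-proportional sampling of vertices, and, for contrast, the classical uniform-vertex estimator of |E|, whose high variance is what makes the proportional and hybrid settings interesting.

```python
import random

def sample_vertex_proportional_to_degree(edges):
    """Simulate degree-proportional (i.e., weight-proportional) vertex sampling
    using only uniform sampling from the edge set E: draw a uniformly random
    edge and return one of its two endpoints uniformly at random.
    A vertex v is returned with probability deg(v) / (2|E|)."""
    u, v = random.choice(edges)        # uniform sample from E
    return u if random.random() < 0.5 else v

def uniform_vertex_estimate_of_edges(vertices, degree, num_samples):
    """Classical uniform-sampling estimator of |E| (shown only for contrast):
    for a uniformly random vertex v, E[deg(v)] = 2|E|/n, so n * mean(deg) / 2
    is an unbiased estimate of |E|.  Its variance can be huge (e.g., on a star
    graph), which is why weighted sampling helps."""
    n = len(vertices)
    sampled = [degree[random.choice(vertices)] for _ in range(num_samples)]
    return n * sum(sampled) / (2 * num_samples)

# Tiny demo on the path graph 0-1-2-3 (degrees 1, 2, 2, 1, so |E| = 3).
edges = [(0, 1), (1, 2), (2, 3)]
vertices = [0, 1, 2, 3]
degree = {0: 1, 1: 2, 2: 2, 3: 1}

counts = {v: 0 for v in vertices}
for _ in range(10_000):
    counts[sample_vertex_proportional_to_degree(edges)] += 1
print(counts)  # empirical frequencies roughly proportional to 1:2:2:1
print(uniform_vertex_estimate_of_edges(vertices, degree, 1_000))  # close to 3
```

In the demo, the frequencies of the sampled endpoints track the degree distribution 1:2:2:1, which is exactly the weight-proportional access assumed by the sum-estimation algorithms described in the abstract.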