Abstract

Submodular function maximization is a canonical combinatorial optimization problem that arises in many areas of computer science, such as data summarization, batch-mode active learning, and recommendation. We focus on stochastic submodular maximization (SSM) problems, where an unknown monotone submodular function is to be maximized under a cardinality constraint. Previous research commonly relied on the Monte-Carlo method to estimate the stochastic submodular function, which faces an accuracy–efficiency dilemma: achieving high accuracy requires a large number of Monte-Carlo simulations, while efficiency demands that simulation costs be minimized. In this paper, we propose a new family of greedy algorithms based on learning automata (LA) for SSM problems. By exploring the solution space more efficiently, the proposed LA-based greedy family accelerates conventional greedy algorithms. Instead of exhaustively evaluating the expected marginal benefit of every element indiscriminately, an LA-based learning strategy is designed to quickly identify the best element among all candidates. Based on this strategy, we present a primitive greedy algorithm and two improved variants, along with proofs that they attain the same approximation guarantee as conventional greedy algorithms. We conducted experiments on three representative SSM tasks: nonparametric learning for structured data, exemplar-based clustering for unstructured data, and influence maximization for graph data. The results verify the advantage of the proposed LA-based greedy family over previous greedy algorithms.
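To make the accuracy–efficiency dilemma concrete, the following is a minimal sketch (not the paper's LA-based method) of the conventional Monte-Carlo greedy baseline the abstract refers to: at each step, the expected marginal gain of every candidate is estimated by averaging repeated noisy simulations, so the per-step cost grows linearly with the sample count. All names (`monte_carlo_greedy`, `sampled_gain`, the toy `coverage` instance) are illustrative assumptions, not from the paper.

```python
import random

def monte_carlo_greedy(ground_set, stochastic_gain, k, num_samples=200, rng=None):
    """Greedy maximization of an unknown monotone submodular function
    under a cardinality constraint |S| <= k.

    stochastic_gain(S, e, rng) returns one noisy sample of the marginal
    gain f(S + e) - f(S); its expectation is estimated by averaging
    num_samples Monte-Carlo draws. Larger num_samples means better
    estimates but higher simulation cost (the accuracy-efficiency dilemma).
    """
    rng = rng or random.Random(0)
    S = []
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set:
            if e in S:
                continue
            # Monte-Carlo estimate of the expected marginal gain of e.
            est = sum(stochastic_gain(S, e, rng) for _ in range(num_samples)) / num_samples
            if est > best_gain:
                best, best_gain = e, est
        S.append(best)
    return S

# Toy stochastic coverage instance: each element covers a fixed point set,
# but each newly covered point only "counts" with probability 0.9 per draw.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}

def sampled_gain(S, e, rng):
    already = set().union(*(coverage[x] for x in S)) if S else set()
    return sum(1 for _ in coverage[e] - already if rng.random() < 0.9)
```

For example, `monte_carlo_greedy(["a", "b", "c"], sampled_gain, 2)` first selects `"a"` (highest estimated expected gain), then one of the remaining elements. The LA-based strategy proposed in the paper aims to avoid spending the full `num_samples` budget uniformly on every candidate at every step.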
