Abstract

The classic influence maximization problem asks how to deploy cascades so that the total influence is maximized, and it assumes that the seed nodes initiating the cascades are selected before the diffusion process begins. In the adaptive version, seed nodes may instead be launched adaptively, after observing partial diffusion results. In this article, we provide a systematic study of the adaptive influence maximization problem, focusing on the algorithmic analysis of general feedback models. We introduce the concept of the regret ratio, which characterizes the key trade-off in designing adaptive seeding strategies, and use it to present an approximation analysis of the well-known greedy policy. In addition, we provide analysis concerning improving efficiency and bounding the regret ratio. Finally, we propose several directions for future research.
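To make the adaptive setting concrete, the sketch below illustrates one common instantiation (not taken from the paper): greedy adaptive seeding under the independent cascade model with full-adoption feedback, where the realized cascade of each seed is observed before the next seed is chosen. The graph format, activation probabilities, and Monte Carlo sample count are illustrative assumptions.

```python
import random

def simulate_cascade(graph, seeds, active, rng):
    """Run one independent-cascade realization from `seeds`, given already-active nodes."""
    newly = [s for s in seeds if s not in active]
    active = set(active) | set(newly)
    frontier = list(newly)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def expected_gain(graph, node, active, rng, samples=200):
    """Monte Carlo estimate of the marginal spread of seeding `node` given `active`."""
    total = 0
    for _ in range(samples):
        total += len(simulate_cascade(graph, [node], active, rng)) - len(active)
    return total / samples

def adaptive_greedy(graph, k, rng=None):
    """Pick k seeds one at a time, observing the realized cascade after each pick."""
    rng = rng or random.Random(0)
    active = set()
    seeds = []
    for _ in range(k):
        candidates = [v for v in graph if v not in active]
        if not candidates:
            break
        best = max(candidates, key=lambda v: expected_gain(graph, v, active, rng))
        seeds.append(best)
        # Full-adoption feedback: the realized cascade from `best` is revealed.
        active = simulate_cascade(graph, [best], active, rng)
    return seeds, active

if __name__ == "__main__":
    # Toy directed graph: node -> list of (neighbor, activation probability).
    g = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 0.4)], 2: [(3, 0.4)], 3: [(4, 0.3)], 4: []}
    seeds, spread = adaptive_greedy(g, k=2)
    print("seeds:", seeds, "activated:", sorted(spread))
```

The key contrast with non-adaptive influence maximization is the feedback step inside the loop: each new seed is chosen conditioned on the nodes already activated, rather than all seeds being fixed in advance.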
