Influence maximization (IM) is a central problem in complex network analysis, with considerable commercial value in domains such as recommendation systems and viral marketing. Its objective is to identify a set of seed nodes that initiates the most extensive cascade of influence within the network. However, IM is NP-hard, making it challenging to accurately identify optimal seed nodes and to estimate influence spread. Although recent learning-based IM methods have shown promising results, challenges related to scalability, efficiency, and influence overlap remain. This paper proposes CoreQ, a novel reinforcement learning-based framework that leverages K-core hierarchies to make informed decisions and learn an optimal policy for seed-node selection. CoreQ reduces computational cost and mitigates the influence overlap problem, yielding a scalable and efficient solution. CoreQ first applies K-core decomposition to analyze the network structure and extract hierarchical features that guide seed-node identification. It then employs a novel maximum likelihood-based approach to select a sufficient number of candidate seed nodes from the K-core hierarchies. Finally, it leverages Q-learning's decision-making capabilities to learn the most effective seed-selection strategy. Experimental results demonstrate that CoreQ not only outperforms state-of-the-art IM methods in influence spread under the weighted independent cascade model but also surpasses competing learning-based IM methods in time efficiency.