Abstract

Identifying clusters, i.e., groups of nodes with comparatively strong internal connectivity, is a fundamental task for understanding the structure and function of a network. Using a lumped Markov chain model of a random walker, we propose two novel ways of inferring the lumped Markov transition matrix, and we derive several useful results from an analysis of the properties of the lumped Markov process. To find the best partition of a complex network, we develop a framework comprising two partitioning algorithms based on optimal lumped Markovian dynamics; both algorithms are constructed to minimize the objective function defined under this framework. Simulation experiments demonstrate that our algorithms efficiently estimate the probabilities with which a node belongs to different clusters during learning, and thus naturally support fuzzy partitions. Moreover, the algorithms are successfully applied to real-world networks, including the social interactions between members of a karate club.

Highlights

  • The theory of network science has significantly improved our understanding of complex systems

  • We derive the expression of the lumped Markov transition matrix for networks via two novel methods

  • This can be considered a generalization of Markov random-walk dynamics in statistics to networks


Introduction

The theory of network science has significantly improved our understanding of complex systems. Markov chains are frequently used as analytic models in the quantitative evaluation of stochastic systems; examples of their use are found in areas as diverse as computer, biological, physical, and social science, as well as in business, economics, and engineering [2,3,4]. In a wide class of situations, the modeler needs information not about each individual state of the system but only about classes of states. This leads to a new process, called the aggregated (or lumped) process, whose states are the state classes of the original Markov chain. In order to utilize the full power of Markov chain theory, it is important to be able to claim that, for a given initial distribution, the aggregated process retains the Markov property.
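The aggregation step described above can be sketched in a few lines. In this illustrative example (the transition matrix and two-class partition are assumptions, not values from the paper), the aggregated transition probability from class C_k to class C_l is the stationary-distribution-weighted average of the original transition probabilities:

```python
import numpy as np

# Hypothetical 4-state Markov chain (rows sum to 1).
P = np.array([[0.7, 0.2, 0.1, 0.0],
              [0.3, 0.5, 0.1, 0.1],
              [0.1, 0.1, 0.6, 0.2],
              [0.0, 0.1, 0.3, 0.6]])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Assumed partition of the four states into two classes.
classes = [[0, 1], [2, 3]]

# Lumped transition probability between classes:
# P_hat[k, l] = sum_{i in C_k} pi_i * (sum_{j in C_l} P[i, j]) / pi(C_k)
m = len(classes)
P_hat = np.zeros((m, m))
for k, Ck in enumerate(classes):
    w = pi[Ck] / pi[Ck].sum()          # pi restricted to C_k, renormalized
    for l, Cl in enumerate(classes):
        P_hat[k, l] = w @ P[np.ix_(Ck, Cl)].sum(axis=1)

print(np.allclose(P_hat.sum(axis=1), 1.0))  # True: P_hat is stochastic
```

Whether the aggregated process defined this way is itself Markovian depends on the chain and the chosen partition (the lumpability question raised above).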
