Abstract

Information-theoretic measures have been useful in quantifying network complexity. Here we briefly survey and contrast (algorithmic) information-theoretic methods that have been used to characterize graphs and networks. We illustrate the strengths and limitations of Shannon’s entropy, lossless compressibility and algorithmic complexity when used to identify aspects and properties of complex networks. We review the fragility of computable measures on the one hand and the invariant properties of algorithmic measures on the other, demonstrating how current approaches to algorithmic complexity are misguided and suffer from limitations similar to those of traditional statistical approaches such as Shannon entropy. Finally, we review some current definitions of algorithmic complexity that are used in analyzing labelled and unlabelled graphs. This analysis opens up several new opportunities to advance beyond traditional measures.
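As a concrete illustration of the fragility of computable measures, the following minimal Python sketch (added here for illustration, not code from the paper) computes Shannon entropy for two natural but different descriptions of the same small graph, the flattened adjacency matrix and the degree sequence, and obtains different values; the example graph and function names are assumptions made for the example.

    # Minimal sketch: Shannon entropy of the same graph under two
    # different descriptions generally disagrees, illustrating that
    # entropy-based complexity is relative to the chosen feature.
    from collections import Counter
    from math import log2

    def shannon_entropy(symbols):
        """Entropy (bits per symbol) of the empirical distribution of `symbols`."""
        counts = Counter(symbols)
        n = len(symbols)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    # A small undirected graph given by its 0/1 adjacency matrix (an arbitrary example).
    A = [
        [0, 1, 1, 0],
        [1, 0, 1, 1],
        [1, 1, 0, 0],
        [0, 1, 0, 0],
    ]

    bits = [b for row in A for b in row]       # description 1: adjacency-matrix bits
    degrees = [sum(row) for row in A]          # description 2: degree sequence

    print("entropy of adjacency bits :", shannon_entropy(bits))     # 1.0 bit/symbol
    print("entropy of degree sequence:", shannon_entropy(degrees))  # 1.5 bits/symbol

The two values differ because entropy is a property of a chosen distribution over a chosen alphabet, not of the graph itself, which is one sense in which such computable measures are fragile.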

Highlights

  • Networks, which are used extensively in science and engineering, are often complex when representing static and dynamic data where edges are relations among objects or events

  • Popular implementations of lossless compression algorithms used to estimate algorithmic information content, such as those based on the Lempel-Ziv (LZ) algorithm, can effectively be implemented using Finite State Automata (FSA) [2]

  • To capture all possible computable properties, the full power of compression implied by algorithmic complexity is needed; this requires computational power equivalent to that of a universal Turing machine, which is not present in popular implementations of lossless compression such as LZ (see the sketch after these highlights)
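The last two highlights can be made concrete with a short Python sketch (added here for illustration, not code from the paper): an LZ78-style parsing, which operates at finite-state power, readily detects the statistical regularity of a periodic string, but treats the output of a seeded pseudo-random generator, an object produced by a short program and therefore of low algorithmic complexity, as nearly incompressible. The parsing routine, string lengths and seed are assumptions made for the example.

    import random

    def lz78_phrase_count(s):
        """Number of phrases in an LZ78-style parsing of s: a rough proxy for
        the compressed size achievable at finite-state (statistical) power."""
        seen = set()
        phrase = ""
        count = 0
        for ch in s:
            phrase += ch
            if phrase not in seen:
                seen.add(phrase)
                count += 1
                phrase = ""
        if phrase:              # leftover partial phrase at the end of the string
            count += 1
        return count

    random.seed(0)
    periodic = "01" * 512                                        # statistically regular string
    pseudo = "".join(random.choice("01") for _ in range(1024))   # output of a short program

    # The parser compresses the periodic string well, but the pseudo-random
    # string, although generated by a few lines of deterministic code,
    # yields close to the maximal number of phrases.
    print("phrases, periodic string     :", lz78_phrase_count(periodic))
    print("phrases, pseudo-random string:", lz78_phrase_count(pseudo))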

Summary

Introduction

Networks, which are used extensively in science and engineering, are often complex when representing static and dynamic data where edges are relations among objects or events. Popular implementations of lossless compression algorithms used to estimate algorithmic information content, such as those based on the Lempel-Ziv (LZ) algorithm, can effectively be implemented using Finite State Automata (FSA) [2]. This means that they do not have sufficient computational power to characterize all the features in data [3]. The use of popular lossless compression implementations, which have been widely applied to approximate algorithmic complexity, is in practice only a minor improvement over classical Shannon information indexes [1], and can only capture statistical regularities commensurate with their computational power, thereby missing relevant algorithmic properties. In an Erdős–Rényi random graph, vertices are randomly and independently connected by links with a fixed prescribed probability (called the edge density); see Figure 2 for a comparison between a regular and a random graph of the same size. Many empirical networks, in contrast, display a power-law degree distribution.
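In the same spirit as the comparison in Figure 2, the following sketch (an added illustration, not the paper's own experiment) contrasts a crude compression-based estimate of information content for a regular graph and an Erdős–Rényi random graph of the same size. It assumes the third-party networkx package and the standard-library zlib compressor; the helper compressed_size and the chosen graph sizes are illustrative.

    # Rough, statistical proxy for information content: length of the
    # zlib-compressed adjacency matrix. Not the paper's own method.
    import zlib
    import networkx as nx

    def compressed_size(G):
        """Bytes needed to zlib-compress the flattened adjacency matrix of G."""
        n = G.number_of_nodes()
        bits = "".join(
            "1" if G.has_edge(i, j) else "0"
            for i in range(n) for j in range(n)
        )
        return len(zlib.compress(bits.encode()))

    n, p = 64, 0.5                                  # p is the edge density
    regular = nx.cycle_graph(n)                     # a simple regular graph (ring lattice)
    random_g = nx.gnp_random_graph(n, p, seed=1)    # Erdős–Rényi G(n, p)

    print("compressed size, regular graph:", compressed_size(regular))
    print("compressed size, random graph :", compressed_size(random_g))

The regular graph compresses to far fewer bytes than the random graph, which is the kind of distinction that compression-based and entropy-based measures do capture; the limitations discussed above concern regularities that are algorithmic rather than statistical.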

Classical Information Theory
Classical Information and Entropy of Graphs
Moving Towards Algorithmic Complexity of Graphs
Lossless Compression in Network Complexity
Alternatives to Lossless Compression
Algorithmic Information Theory
Algorithmic Probability
Approximations to Graph Algorithmic Complexity
Reconstructing K of Graphs from Local Patterns
Group-Theoretic Robustness of Algorithmic Graph Complexity
Conclusions
