Abstract

We introduce a definition of algorithmic symmetry in the context of geometric and spatial complexity, able to capture mathematical aspects of different objects, using polyominoes and polyhedral graphs as case studies. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov–Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity, both theoretical and numerical, with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize spatial, geometric, symmetric and topological properties of mathematical objects and graphs.
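As a rough illustration of the compression-based approximation mentioned above (a hedged sketch only, not the estimators developed in this work; the grid encodings, sizes and helper names are assumptions made for demonstration), the following Python fragment compares the losslessly compressed size of a highly symmetric binary grid with that of a pseudo-random one of the same size.

```python
# Illustrative sketch only: approximating complexity by lossless compression.
# The grids, sizes and helper names are assumptions for demonstration,
# not the encodings or estimators used in the paper.
import random
import zlib

def compressed_size(bits: str) -> int:
    """Size in bytes of the zlib-compressed string; a crude upper-bound
    proxy for algorithmic (Kolmogorov-Chaitin) complexity."""
    return len(zlib.compress(bits.encode(), 9))

N = 32  # side length of a toy binary grid

# A highly symmetric grid (checkerboard pattern) ...
symmetric = "".join("01"[(i + j) % 2] for i in range(N) for j in range(N))

# ... versus a pseudo-random grid of the same size.
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(N * N))

print("symmetric grid :", compressed_size(symmetric), "bytes")  # compresses strongly
print("random grid    :", compressed_size(noisy), "bytes")      # compresses poorly
```

The symmetric grid compresses to far fewer bytes, i.e. it receives a lower estimated complexity. For very small objects, however, compressed sizes are dominated by compressor overhead, which is one reason the Algorithmic Probability-based estimates discussed in the Highlights and below are presented as an alternative.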

Highlights

  • The literature on connections between symmetry and complexity is sparse, in particular with regard to information theory and algorithmic complexity

  • We show that algorithmic probability (AP)-based measures are either equivalent or superior to other, more limited measures, such as lossless compression algorithms (widely used as estimators of algorithmic complexity) and Shannon entropy and related measures, which are based on traditional statistics and require that broad assumptions be encoded in their underlying probability distributions

  • While some of these measures may continue to be of interest in approaches to molecular similarity, here we instead explore more universal approaches to the problem of feature-free approximation of the complexity of objects, both geometric and topological



Introduction

The literature on connections between symmetry and complexity is sparse, in particular with regard to information theory and algorithmic complexity. In [1], a relation between symmetry and entropy was suggested in the context of molecular complexity, thereby establishing connections between low symmetry and low entropy or higher (classical, as opposed to algorithmic) information content. A measure of structural complexity, however, must not be based on classical symmetry. The use of symmetry in the realm of complexity is justified only in the context of combinatorial complexity, where equivalences are established among the diverse elements of an object with symmetry by way of statistical regularities. It is clear that classical and algorithmic information theory measure different properties. Shannon entropy provides no means of access to the underlying probability distributions, and thus relies heavily on assumptions about the ensemble to which the object in question is supposed to belong.
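To make the distinction concrete, here is a minimal sketch (the strings, seed and helper names are illustrative assumptions, not taken from the original text) contrasting Shannon entropy computed from observed symbol frequencies, i.e. under an assumed i.i.d. ensemble, with a compression-based estimate of algorithmic complexity.

```python
# Minimal sketch (assumptions: toy strings, zlib as a stand-in compressor)
# contrasting Shannon entropy under an assumed i.i.d. ensemble with a
# compression-based estimate of algorithmic complexity.
import math
import random
import zlib
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Entropy in bits per symbol, computed from the empirical symbol frequencies."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def compressed_length(s: str) -> int:
    """zlib-compressed size in bytes: a crude upper bound on algorithmic complexity."""
    return len(zlib.compress(s.encode(), 9))

periodic = "01" * 512                      # algorithmically trivial, symbol-balanced
random.seed(1)
pseudo_random = "".join(random.choice("01") for _ in range(1024))

for name, s in [("periodic", periodic), ("pseudo-random", pseudo_random)]:
    print(f"{name:13s}  entropy ~ {shannon_entropy(s):.3f} bits/symbol,"
          f"  compressed ~ {compressed_length(s)} bytes")
```

Both strings look maximally informative to the entropy estimate (about 1 bit per symbol), yet only the pseudo-random one fails to compress: the algorithmic regularity of the periodic string is invisible to a measure that sees nothing beyond symbol frequencies under its assumed distribution.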


