Abstract

Information content and compression are tightly related concepts that can be addressed through classical and algorithmic information theory. Several quantities in the latter have been defined by analogy with notions of the former, such as entropy and mutual information, since the basic concepts of the two approaches share many common traits. In this work we further extend this parallelism by defining algorithmic versions of cross-entropy and relative entropy (or Kullback-Leibler divergence), two well-known concepts in classical information theory. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the relative complexity of x with respect to y as the compression power lost when x is described in this way rather than by its shortest representation. Since the main drawback of these concepts is their uncomputability, a suitable approximation based on data compression is derived for both and applied to real data. This allows us to improve on the results obtained by similar, previously proposed methods that were defined only intuitively.
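
As a rough illustration of the compression-based approximation mentioned above, the sketch below estimates the cross term by compressing x with a zlib dictionary primed on y, and the relative term as the difference with x's own compressed size. This is a minimal illustrative sketch under those assumptions, not the approximation actually derived in the paper (whose construction is not given in the abstract); the function names and the choice of zlib are hypothetical.

```python
import zlib

def compressed_size(x: bytes, level: int = 9) -> int:
    """Proxy for the complexity K(x): length of x compressed with zlib."""
    return len(zlib.compress(x, level))

def cross_size(x: bytes, y: bytes, level: int = 9) -> int:
    """Illustrative proxy for the cross-complexity of x with respect to y:
    compress x using a zlib preset dictionary built from y, i.e. describe x
    'in terms of' y. (Only an assumption for illustration, not the paper's
    estimator.)"""
    comp = zlib.compressobj(level, zlib.DEFLATED, zlib.MAX_WBITS,
                            zlib.DEF_MEM_LEVEL, zlib.Z_DEFAULT_STRATEGY,
                            zdict=y)
    return len(comp.compress(x) + comp.flush())

def relative_size(x: bytes, y: bytes) -> int:
    """Illustrative proxy for the relative complexity of x with respect to y:
    the extra cost of describing x through y instead of by its own (best
    zlib) representation. Unlike the theoretical quantity, this crude
    estimate can be negative for very similar objects."""
    return cross_size(x, y) - compressed_size(x)

if __name__ == "__main__":
    x = b"the quick brown fox jumps over the lazy dog " * 20
    y = b"the quick brown fox jumps over " * 30   # related source
    z = bytes(range(256)) * 4                     # unrelated source
    print("cross vs related   :", cross_size(x, y))
    print("cross vs unrelated :", cross_size(x, z))
    print("relative, related  :", relative_size(x, y))
    print("relative, unrelated:", relative_size(x, z))
```

Describing x through a related object y should yield a smaller cross value (and hence a smaller relative value) than describing it through an unrelated one, which is the behaviour such divergence-like measures are meant to capture.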
