Abstract

How can one measure the complexity of a finite set of vectors embedded in a multidimensional space? This is a non-trivial question which can be approached in many different ways. Here we suggest a set of data complexity measures using universal approximators, principal cubic complexes. Principal cubic complexes generalize the notion of principal manifolds for datasets with non-trivial topologies. The type of the principal cubic complex is determined by its dimension and a grammar of elementary graph transformations. The simplest grammar produces principal trees. We introduce three natural types of data complexity: (1) geometric (deviation of the data’s approximator from some “idealized” configuration, such as deviation from harmonicity); (2) structural (how many elements of a principal graph are needed to approximate the data); and (3) construction complexity (how many applications of elementary graph transformations are needed to construct the principal object starting from the simplest one). We compute these measures for several simulated and real-life data distributions and show them in “accuracy–complexity” plots, helping to optimize the accuracy/complexity ratio. We discuss various issues connected with measuring data complexity. Software for computing data complexity measures from principal cubic complexes is provided as well.
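To make the three measures concrete, here is a minimal Python sketch, assuming a principal graph given as an array of node positions and an edge list. The function names, the data layout, and the use of deviation from the mean of graph neighbours as a proxy for non-harmonicity are illustrative assumptions, not the exact functionals used in the paper or its accompanying software.

```python
import numpy as np

def structural_complexity(node_positions, edges):
    """Structural complexity: how many elements the principal graph uses."""
    return {"nodes": len(node_positions), "edges": len(edges)}

def deviation_from_harmonicity(node_positions, edges):
    """Geometric complexity proxy: mean squared deviation of each node
    from the average of its graph neighbours (zero for a 'harmonic' graph)."""
    pos = np.asarray(node_positions, dtype=float)
    neighbours = {i: [] for i in range(len(pos))}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    deviations = [
        np.sum((pos[i] - pos[nbrs].mean(axis=0)) ** 2)
        for i, nbrs in neighbours.items() if nbrs
    ]
    return float(np.mean(deviations))

def construction_complexity(grammar_operations):
    """Construction complexity: number of elementary graph transformations
    applied to grow the principal graph from the simplest one."""
    return len(grammar_operations)
```

For example, a three-node chain in the plane with nodes at (0, 0), (1, 0.5), (2, 0) and edges (0, 1), (1, 2) has structural complexity {nodes: 3, edges: 2} and a non-zero harmonicity deviation, since the middle node does not lie at the mean of its neighbours.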

Highlights

  • The rapid development of computer-based technologies in many areas of science, including physics, molecular biology, and environmental research, has led to the appearance of large datasets characterized as “Big Data” [1]

  • On the abscissa of the accuracy–complexity plot we show the Fraction of Variance Explained (FVE), i.e., one minus the ratio of the Mean Squared Error to the total data variance

  • Given the type of the approximator, one can estimate its complexity with respect to a dataset from the “accuracy–complexity” plot: the optimal approximator corresponds to the point beyond which any further increase in accuracy leads to a drastic increase in complexity (as illustrated in the sketch after these highlights)
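The following hedged Python sketch illustrates the two highlights above: it computes FVE as one minus the ratio of the mean squared error to the total data variance, and selects a point on a hypothetical accuracy–complexity curve where further gains in accuracy start to require a sharp increase in complexity. The function names and the simple "elbow" criterion are assumptions made for illustration, not the paper's prescribed procedure.

```python
import numpy as np

def fraction_of_variance_explained(X, projections):
    """FVE = 1 - MSE / total variance, where MSE is the mean squared
    distance from each data point to its projection on the approximator."""
    mse = np.mean(np.sum((X - projections) ** 2, axis=1))
    total_variance = np.mean(np.sum((X - X.mean(axis=0)) ** 2, axis=1))
    return 1.0 - mse / total_variance

def pick_elbow(fve_values, complexity_values):
    """Illustrative heuristic: choose the approximator after which a unit
    gain in accuracy (FVE) costs the largest increase in complexity."""
    fve = np.asarray(fve_values, dtype=float)
    cx = np.asarray(complexity_values, dtype=float)
    # Marginal complexity cost per unit of accuracy between consecutive
    # approximators; the small constant guards against division by zero.
    cost = np.diff(cx) / np.maximum(np.diff(fve), 1e-12)
    return int(np.argmax(cost))  # index of the last "cheap" approximator

# Example: FVE and structural complexity for a sequence of approximators.
fve = [0.55, 0.72, 0.84, 0.90, 0.92, 0.93]
complexity = [2, 4, 7, 12, 25, 60]
print(pick_elbow(fve, complexity))  # 3: accuracy beyond ~0.90 becomes costly
```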



Introduction

The rapid development of computer-based technologies in many areas of science, including physics, molecular biology, and environmental research, has led to the appearance of large datasets characterized as “Big Data” [1]. Storing, analyzing, querying, and visualizing Big Data poses a tremendous challenge. It is frequently said that the problem with Big Data is that it is big and that it is complex. It would be useful to define what “complex data” means and to be able to measure that complexity. This study is devoted to an attempt to define a way to measure particular aspects of data complexity connected to the data’s geometry.
