Abstract

The increasingly sophisticated investigation of complex systems requires more robust estimates of the correlations between the measured quantities. The traditional Pearson correlation coefficient is easy to calculate but is sensitive only to linear correlations. The total influence between quantities is, therefore, often expressed in terms of the mutual information, which also takes into account nonlinear effects but is not normalized. To compare data from different experiments, the information quality ratio is, therefore, in many cases easier to interpret. On the other hand, both the mutual information and the information quality ratio are always positive and, therefore, cannot provide information about the sign of the influence between quantities. Moreover, they require an accurate determination of the probability distribution functions of the variables involved. As the quality and amount of available data are not always sufficient to guarantee an accurate estimate of the probability distribution functions, it has been investigated whether neural computational tools can help and complement the aforementioned indicators. Specific encoders and autoencoders have been developed for the task of determining the total correlation between quantities related by a functional dependence, including information about the sign of their mutual influence. Both their accuracy and computational efficiency have been addressed in detail, with extensive numerical tests using synthetic data. A careful analysis of the robustness against noise has also been performed. The neural computational tools typically outperform the traditional indicators in practically every respect.

Highlights

Affiliation: Associazione EURATOM-ENEA, University of Rome “Tor Vergata”, 00133 Rome, Italy; these authors contributed to the work

  • A very common and computationally efficient indicator to quantify the linear correlations between variables is the Pearson correlation coefficient (PCC)

  • In the case of investigations involving highly nonlinear phenomena, as is often the case in the science of complex systems, the conclusions obtained from the analysis of the PCC can be highly misleading
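To illustrate the point made in the highlights, the following minimal sketch (not taken from the paper; the quadratic example is an assumption chosen for illustration) shows that the PCC of a fully deterministic but nonlinear dependence is close to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 10_000)
y = x**2  # fully deterministic, but purely nonlinear, dependence

# Pearson correlation coefficient: sensitive only to linear correlations
pcc = np.corrcoef(x, y)[0, 1]
print(f"PCC(x, x^2) = {pcc:.3f}")  # near zero despite total dependence
```

Here the covariance vanishes because E[x³] = 0 for a symmetric input distribution, so the PCC reports essentially no relation even though y is completely determined by x.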


Summary

Quantifying the Influence between Variables

Causality is an essential element of human cognition and is typically the final goal of most scientific enterprises. When analyzing cross-sectional data, even the preliminary stage of determining the correlation between quantities can become a very challenging task. This is particularly true in the investigation of complex systems in the presence of overwhelming amounts of data. The most widely used tools to determine the relation between variables present some significant limitations; they either detect only the linear correlations, or they require large amounts of data and do not provide any hint about the directionality of the influence (see Section 2). In this contribution, it is explored to what extent specific neural networks can help in at least alleviating some of these insufficiencies. Conclusions and lines of future investigation are the subject of the last section of the paper.
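The traditional nonlinear indicators mentioned above can be sketched as follows. This is a minimal histogram-based estimate (an assumption for illustration, not the paper's implementation), using one common definition of the information quality ratio, IQR = I(X;Y)/H(X,Y); the helper `mi_and_iqr` is hypothetical:

```python
import numpy as np

def mi_and_iqr(x, y, bins=32):
    """Histogram estimate of the mutual information I(X;Y) (in nats)
    and the information quality ratio IQR = I(X;Y) / H(X,Y)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                  # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)    # marginal of X
    py = pxy.sum(axis=0, keepdims=True)    # marginal of Y
    nz = pxy > 0                           # skip empty cells: avoid log(0)
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))  # joint entropy
    return mi, mi / h_xy

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 50_000)
y = x**2 + rng.normal(0.0, 0.05, x.size)   # nonlinear, noisy dependence

mi, iqr = mi_and_iqr(x, y)
print(f"I(X;Y) = {mi:.2f} nats, IQR = {iqr:.2f}")
```

The mutual information comes out clearly positive for this quadratic relation, where the PCC would be near zero; note, however, that both quantities are non-negative by construction, so, as stated in the abstract, they carry no sign information, and the histogram step is exactly where the accuracy of the estimated probability distributions becomes critical.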

Outline

Correlation and Mutual Information between Variables
Numerical Tests for Total Correlations
Findings
Conclusions
