Abstract

Measures of signal complexity, such as the Hurst exponent, the fractal dimension, and the spectrum of Lyapunov exponents, are used in time series analysis to estimate the persistence, anti-persistence, fluctuations, and predictability of the data under study. They have proven beneficial for time series prediction with machine and deep learning, both by indicating which features may be relevant for prediction and by serving as complexity features themselves. Further, the performance of machine learning approaches can be improved by taking the complexity of the data into account, e.g., by adapting the employed algorithm to the inherent long-term memory of the data. In this article, we review complexity and entropy measures in combination with machine learning approaches. We give a comprehensive review of relevant publications that suggest using fractal or complexity-measure concepts to improve existing machine or deep learning approaches. Additionally, we evaluate applications of these concepts and examine whether they are helpful for predicting and analyzing time series with machine and deep learning. Finally, we give a list of six ways to combine machine learning and measures of signal complexity, as found in the literature.
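
To make one of these measures concrete, the sketch below estimates the Hurst exponent of a one-dimensional series via rescaled-range (R/S) analysis, one common estimator among several (detrended fluctuation analysis being another). The function name, window spacing, and parameter choices are illustrative assumptions and are not taken from the reviewed publications.

```python
import numpy as np

def hurst_rs(x, min_window=8, n_scales=20):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis.

    H ~ 0.5 suggests an uncorrelated process, H > 0.5 persistence
    (long-term memory), and H < 0.5 anti-persistence.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Window sizes spaced roughly logarithmically between min_window and n // 2.
    windows = np.unique(
        np.floor(np.logspace(np.log10(min_window), np.log10(n // 2), n_scales)).astype(int)
    )

    log_w, log_rs = [], []
    for w in windows:
        rs_values = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            cum = np.cumsum(seg - seg.mean())   # cumulative deviations from the mean
            r = cum.max() - cum.min()           # range of the cumulative deviations
            s = seg.std(ddof=1)                 # standard deviation of the segment
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_values)))

    # The Hurst exponent is the slope of log(R/S) versus log(window size).
    slope, _ = np.polyfit(log_w, log_rs, 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(4096)
    print(hurst_rs(noise))             # close to 0.5 for uncorrelated noise
    print(hurst_rs(np.cumsum(noise)))  # close to 1.0 for the integrated (persistent) series
```

In the spirit of the reviewed works, the resulting value can be used either to judge whether a series carries enough long-term memory to be predictable or as an additional input feature for a learning model.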

Highlights

  • The rise of artificial intelligence has resulted in the increased use of machine and deep learning for analysis and prediction based on historical data instead of mechanistic expert models

  • The results show, first, that the predictions could be improved by adding effective transfer entropy (ETE) to the input and, second, that the long short-term memory (LSTM) network and the multilayer perceptron (MLP) perform best for this task (a minimal feature-augmentation sketch follows this list)

  • Among the publications found, Ref. [19] uses the Hurst exponent to show long-term memory in time series data as an argument for its predictability when artificial neural networks are applied to stock market prediction
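
The feature-augmentation idea referenced above can be sketched as follows: lagged values of the series are combined with one complexity feature per window and fed to a scikit-learn MLPRegressor. The helper names (lag1_autocorr, make_features) and the use of lag-1 autocorrelation as a stand-in for a complexity measure such as ETE or the Hurst exponent are illustrative assumptions, not the method of any specific reviewed paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lag1_autocorr(window):
    """Placeholder complexity/persistence measure: lag-1 autocorrelation of the window.
    In the reviewed studies this slot is filled by, e.g., the Hurst exponent or
    effective transfer entropy."""
    w = window - window.mean()
    denom = np.dot(w, w)
    return float(np.dot(w[:-1], w[1:]) / denom) if denom > 0 else 0.0

def make_features(series, n_lags=20, complexity_fn=lag1_autocorr):
    """Build a design matrix of lagged values plus one complexity feature per window."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        window = series[t - n_lags:t]
        X.append(np.concatenate([window, [complexity_fn(window)]]))
        y.append(series[t])            # one-step-ahead target
    return np.array(X), np.array(y)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    series = np.cumsum(rng.standard_normal(2000))  # toy persistent series (random walk)

    X, y = make_features(series)
    split = int(0.8 * len(X))

    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
    model.fit(X[:split], y[:split])
    print("test R^2:", model.score(X[split:], y[split:]))
```

The same pattern applies to an LSTM: the complexity column is simply appended to each input window (or supplied as a static covariate) before training.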

Introduction

The rise of artificial intelligence has resulted in the increased use of machine and deep learning for analysis and prediction based on historical data instead of mechanistic expert models. The outcomes of these predictions are encouraging: in solid-state physics, for example, first-principles calculations can be sped up [1], and solar radiation can be predicted using machine learning methods [2]. Machine learning can also be applied to genomics, proteomics, and evolution [3], and it can be used to predict yields and give estimates of the nitrogen status [6].
