Machine learning has permeated almost all areas in which inferences are drawn from financial data. Nevertheless, in financial market risk measurement most machine learning techniques struggle with some inherent difficulties: financial time series are very noisy, non-stationary and often rather short. This paper presents an easy-to-implement sequential learning algorithm that overcomes some of these disadvantages. It is based on a Kalman filtering mechanism for quite general stochastic processes and provides a first step towards separating parameter dynamics from the ubiquitous noise component. The core idea is to exploit stylised facts of financial market time series, such as time-varying measures of volatility. The new approach is tested on real market data in two different settings. First, a hypothetical portfolio containing credit spread and equity risk is analysed over a time frame spanning the outbreak of the global pandemic in 2020 and the beginning of the Russian attack on Ukraine in 2022. A second analysis focuses on the US$/EUR exchange rate during a period covering the global financial crisis of 2008 and the subsequent European sovereign debt crisis. In all test calculations the proposed sequential learning algorithm performs better than the historical simulation approach used by many firms in the banking industry to meet regulatory capital requirements. Due to its simplicity, the method has a high degree of explainability and interpretability, which reduces the inherent model risk. The paper concludes with a discussion of model risk for machine learning in financial institutions. Compared to classical model risk frameworks, the emphasis must shift towards the more prominent role of data. The simple approach described in this paper shows that machine learning in financial market risk does not have to get lost in noise.
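To make the filtering idea more concrete, a minimal sketch of a sequential, Kalman-filter-based estimate of time-varying volatility might look as follows. This is only an illustration under stated assumptions: the random-walk state model, the treatment of squared returns as noisy variance observations, and the noise parameters q and r are assumptions for the sketch, not the specific algorithm developed in the paper.

```python
import numpy as np

def kalman_variance_filter(returns, q=1e-6, r=1e-4, x0=None, p0=1.0):
    """Scalar Kalman filter with a random-walk state for the latent variance.

    Squared returns are treated as noisy observations of the variance.
    q (state noise) and r (observation noise) are assumed tuning parameters;
    the paper's actual state definition and calibration are not given in the
    abstract.
    """
    x = np.var(returns) if x0 is None else x0    # initial variance estimate
    p = p0                                        # initial estimation uncertainty
    estimates = np.empty(len(returns))
    for t, ret in enumerate(returns):
        # Predict step: random-walk state, so the prior mean carries over.
        x_pred = x
        p_pred = p + q
        # Update step: squared return as a noisy observation of the variance.
        y = ret ** 2
        k = p_pred / (p_pred + r)                 # Kalman gain
        x = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred
        estimates[t] = max(x, 0.0)                # keep the variance non-negative
    return estimates

if __name__ == "__main__":
    # Placeholder return series; in practice daily log-returns would be used.
    rng = np.random.default_rng(0)
    rets = rng.normal(0.0, 0.01, size=500)
    var_path = kalman_variance_filter(rets)
    print(var_path[-5:])                          # latest filtered variance estimates
```

The sequential structure is the point: each new observation updates the volatility estimate in one predict/update step, so no repeated refitting over a long, noisy history is required.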