This study explores the application of machine learning methods to economic recession prediction in the UK and USA, given the limitations of traditional approaches. Eleven models were assessed using economic data since 1900: Logistic Regression, Linear Discriminant Analysis, K Nearest Neighbours, Decision Tree Classifier, Gaussian Naive Bayes, Support Vector Classifier, Neural Network, Random Forest Classifier, Long Short-Term Memory, Convolutional Neural Network, and XGBoost. The UK data comprised GDP, unemployment rate, inflation, the FTSE 100 index, the yield curve, and debt levels, while the USA data used the 50-day simple moving average of 10-year treasury rates minus the 50-day simple moving average of 3-month treasury rates. Performance was evaluated by F1, recall, and accuracy averaged over 100 iterations, with confusion matrices comparing model predictions against actual events. Long Short-Term Memory performed best, with recall and F1 of 0.96 and 0.97, correctly identifying 11 of 12 positive USA events. K Nearest Neighbours, Decision Tree Classifier, Random Forest Classifier, and XGBoost produced good results, with recall ranging from 0.75 to 0.99 and F1 from 0.69 to 1.0, correctly identifying 2 of 3 positive events. Logistic Regression, Gaussian Naive Bayes, and Neural Network were less reliable, while Linear Discriminant Analysis, Support Vector Classifier, and Convolutional Neural Network were wholly inadequate. Applied to recent data, most models predicted that the USA would avoid recession in 2023-24, although the predicted probability rose to 0.5 by mid-2023 before declining. Logistic Regression, Linear Discriminant Analysis, and Long Short-Term Memory initially predicted no recession, but their probabilities rose rapidly to between 0.83 and 0.97 by April 2024. While recession avoidance remains plausible, the modelling indicates an escalating risk.
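The USA input feature described above, a 50-day simple moving average of the 10-year yield minus a 50-day simple moving average of the 3-month yield, can be sketched in pandas. This is a minimal illustration on synthetic random-walk series, not the study's actual data; the column names and date range are assumptions.

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for daily 10-year and 3-month US Treasury yields.
rng = np.random.default_rng(0)
dates = pd.date_range("2020-01-01", periods=300, freq="B")
rates = pd.DataFrame({
    "rate_10y": 1.5 + rng.normal(0, 0.02, len(dates)).cumsum(),
    "rate_3m": 1.0 + rng.normal(0, 0.02, len(dates)).cumsum(),
}, index=dates)

# 50-day simple moving averages of each series, then their difference:
# the smoothed yield-curve spread used as the USA input feature.
sma_10y = rates["rate_10y"].rolling(window=50).mean()
sma_3m = rates["rate_3m"].rolling(window=50).mean()
spread = (sma_10y - sma_3m).dropna()

# A negative smoothed spread (an inverted yield curve) is the classic
# recession warning signal.
print(spread.tail())
```

Smoothing each series before differencing suppresses day-to-day noise in the spread, at the cost of a 50-day lag before the feature becomes available.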
The results underscore the utility of machine learning in recession prediction and the importance of diverse training datasets. Algorithmic performance varied, with Long Short-Term Memory and XGBoost proving most accurate. Further gains will require refining the training datasets and leveraging advanced sequence models such as Long Short-Term Memory.
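The evaluation protocol, F1, recall, and accuracy averaged over repeated train/test iterations, can be sketched as follows. This uses synthetic, class-imbalanced data and 20 iterations for brevity (the study used 100); the dataset and split parameters are assumptions, not the study's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the economic feature matrix and recession labels
# (recessions are the rare positive class, hence the imbalanced weights).
X, y = make_classification(n_samples=400, n_features=6,
                           weights=[0.8, 0.2], random_state=0)

f1s, recalls, accs = [], [], []
for seed in range(20):  # repeated random splits (100 in the study)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=seed, stratify=y)
    model = RandomForestClassifier(n_estimators=100,
                                   random_state=seed).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    f1s.append(f1_score(y_te, pred))
    recalls.append(recall_score(y_te, pred))
    accs.append(accuracy_score(y_te, pred))

print(f"F1={np.mean(f1s):.2f} "
      f"recall={np.mean(recalls):.2f} "
      f"accuracy={np.mean(accs):.2f}")
```

Averaging over many random splits reduces the variance of the reported scores, which matters when positive (recession) events are scarce.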