Abstract

In this paper, the adaptive moment estimation (Adam) approach, the current variant of the stochastic gradient descent (SGD) approach, is improved by adding the standard error to its updating rule. The aim is to accelerate the convergence of the Adam algorithm. The improved method is termed the Adam with standard error (AdamSE) algorithm. In addition, a mean-variance portfolio optimization model is formulated from historical data on the rates of return of the S&P 500 stock, the 10-year Treasury bond, and the money market. The application of the SGD, Adam, adaptive moment estimation with maximum (AdaMax), Nesterov-accelerated adaptive moment estimation (Nadam), AMSGrad, and AdamSE algorithms to solving the mean-variance portfolio optimization problem is then investigated. In each case, the iterative solution converges to the optimal portfolio, and the AdamSE algorithm requires the fewest iterations. The results show that the convergence rate of the Adam algorithm is significantly enhanced by the AdamSE algorithm. In conclusion, the efficiency of the improved Adam algorithm using the standard error is demonstrated, and the applicability of the SGD, Adam, AdaMax, Nadam, AMSGrad, and AdamSE algorithms to the mean-variance portfolio optimization problem is validated.
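For orientation, a mean-variance model of the kind described above is usually written in the standard Markowitz form; this is a sketch of the usual formulation, and the paper's exact objective, constraints, and risk weighting may differ:

\[
\min_{w} \; \tfrac{1}{2}\, w^{\top} \Sigma\, w \;-\; \lambda\, \mu^{\top} w
\quad \text{subject to} \quad \mathbf{1}^{\top} w = 1,
\]

where $w$ is the vector of portfolio weights on the three assets, $\mu$ the vector of their mean rates of return, $\Sigma$ their covariance matrix, and $\lambda$ a risk-tolerance parameter.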

Highlights

  • The application of the stochastic gradient descent (SGD) approach to machine learning and deep learning is actively explored

  • The mean-variance portfolio optimization problem [4], which deals with risk and return, has attracted the attention of the investment community. The optimal decision on the portfolio selection is needed, where a scientific approach is employed to maximize the return with the minimum risk [5]

  • The standard error from sampling theory is added to the updating rule of the adaptive moment estimation (Adam) algorithm [8], the current variant of the SGD approach (see the sketch after this list)
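As a rough illustration of this last point, the sketch below shows one way a standard-error term could enter the Adam update. It is a hypothetical rendering in Python: the per-sample gradients, the adam_se_step helper, and the exact way the standard error scales the step are assumptions, not the paper's definitive AdamSE rule.

    import numpy as np

    def adam_se_step(w, grads, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        # `grads` holds per-sample gradients of shape (batch, dim).
        g = grads.mean(axis=0)                                # mini-batch gradient
        se = grads.std(axis=0, ddof=1) / np.sqrt(len(grads))  # standard error
        m = b1 * m + (1 - b1) * g                             # first-moment estimate
        v = b2 * v + (1 - b2) * g ** 2                        # second-moment estimate
        m_hat = m / (1 - b1 ** t)                             # bias corrections
        v_hat = v / (1 - b2 ** t)
        # Assumption: the standard error enlarges the step in the direction
        # of the corrected gradient, as one reading of "adding the standard
        # error in the updating rule".
        w = w - lr * (m_hat + np.sign(m_hat) * se) / (np.sqrt(v_hat) + eps)
        return w, m, v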

Introduction

The application of the stochastic gradient descent (SGD) approach to machine learning and deep learning is actively explored. A disadvantage of the SGD approach, its slow convergence [6, 7], is well known. To address this weakness, the standard error from sampling theory is added to the updating rule of the adaptive moment estimation (Adam) algorithm [8], the current variant of the SGD approach. The application of SGD methods, including the Adam, adaptive moment estimation with maximum (AdaMax), Nesterov-accelerated adaptive moment estimation (Nadam), AMSGrad, and AdamSE approaches, to solving the mean-variance portfolio optimization problem is then studied. For this purpose, historical data on the rates of return of the S&P 500 stock, the 10-year Treasury bond, and the money market are employed.
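To make the experimental setup concrete, the sketch below runs a plain Adam loop on an assumed mean-variance objective for the three assets. The statistics mu and Sigma, the risk weight lam, and the omission of the budget constraint are simplifications for illustration, not the paper's data or formulation; the other optimizers named above differ only in how the step inside the loop is computed.

    import numpy as np

    # Placeholder return statistics for the three assets (stock, bond,
    # money market); the paper estimates these from historical data.
    mu = np.array([0.10, 0.05, 0.02])          # mean rates of return (assumed)
    Sigma = np.array([[0.040, 0.006, 0.001],
                      [0.006, 0.010, 0.001],
                      [0.001, 0.001, 0.002]])  # covariance matrix (assumed)
    lam = 1.0                                  # risk-tolerance weight (assumed)

    def grad(w):
        # Gradient of the objective 0.5 * w' Sigma w - lam * mu' w;
        # the budget constraint is omitted here for simplicity.
        return Sigma @ w - lam * mu

    w = np.ones(3) / 3                         # start from equal weights
    m = np.zeros(3)
    v = np.zeros(3)
    b1, b2, lr, eps = 0.9, 0.999, 0.1, 1e-8
    for t in range(1, 2001):
        g = grad(w)
        m = b1 * m + (1 - b1) * g              # first-moment estimate
        v = b2 * v + (1 - b2) * g ** 2         # second-moment estimate
        m_hat = m / (1 - b1 ** t)              # bias corrections
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    print(w, np.linalg.norm(grad(w)))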

Problem Description
Stochastic Optimization Method
Findings
Illustrative Example
