The fractional-order gradient descent (FOGD) method has been employed by numerous scholars in Artificial Neural Networks (ANN), and its superior performance has been validated both theoretically and experimentally. However, current FOGD methods apply fractional-order differentiation only to the loss function. Applying FOGD based on Autograd to the hidden layers leverages the characteristics of fractional-order differentiation and significantly enhances its flexibility. Moreover, implementing FOGD in the hidden layers provides a necessary foundation for establishing a family of fractional-order deep learning optimizers, facilitating the widespread application of FOGD in deep learning. This paper proposes an improved fractional-order gradient descent (IFOGD) method based on the Multilayer Perceptron (MLP). Firstly, a fractional matrix differentiation algorithm and its solver are proposed based on MLP, ensuring that IFOGD can be applied within the hidden layers. Secondly, we resolve the issue of incorrect backpropagation direction caused by the absolute value operator, ensuring that the IFOGD method does not cause the loss function to diverge. Thirdly, fractional-order Autograd (FOAutograd) is proposed based on PyTorch by reconstructing the Linear layer and the Mean Squared Error loss module. By combining FOAutograd with first-order adaptive deep learning optimizers, the parameter matrices in each layer of an ANN can be updated using fractional-order gradients. Finally, we compare and analyze the performance of IFOGD against other methods in simulation experiments and time series prediction tasks. The experimental results demonstrate that the IFOGD method achieves superior performance.
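To illustrate the general idea of routing fractional-order gradients through a hidden layer via a custom PyTorch autograd Function, the following minimal sketch wraps a Linear layer and scales the standard weight gradient in the backward pass by a Caputo-style factor |w|^(1-α)/Γ(2-α). The fractional order ALPHA, the clamping constant, and this particular gradient rule are illustrative assumptions; the paper's actual FOAutograd construction, its reconstructed MSE loss module, and its treatment of the absolute-value direction issue may differ.

```python
import math
import torch

ALPHA = 0.9  # assumed fractional order, 0 < ALPHA < 1 (illustrative value)

class FracLinearFunction(torch.autograd.Function):
    """Hypothetical fractional-order Linear layer: the forward pass is a
    standard affine map; the backward pass scales the weight gradient by
    a Caputo-style factor |w|^(1 - ALPHA) / Gamma(2 - ALPHA). This is an
    assumed form for illustration, not the paper's exact FOAutograd rule."""

    @staticmethod
    def forward(ctx, input, weight, bias):
        ctx.save_for_backward(input, weight, bias)
        return input @ weight.t() + bias

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_output @ weight        # standard gradient w.r.t. the input
        grad_weight = grad_output.t() @ input    # standard gradient w.r.t. the weight
        # Fractional scaling: abs() keeps the base non-negative so the power is
        # real-valued, while grad_weight itself preserves the descent direction.
        frac_factor = (
            weight.abs().clamp_min(1e-8).pow(1.0 - ALPHA) / math.gamma(2.0 - ALPHA)
        )
        grad_weight = grad_weight * frac_factor
        grad_bias = grad_output.sum(dim=0)
        return grad_input, grad_weight, grad_bias

# Usage sketch: the fractional-order gradient lands in w.grad and can then be
# consumed by any first-order adaptive optimizer (e.g. torch.optim.Adam).
x = torch.randn(4, 3)
w = torch.randn(2, 3, requires_grad=True)
b = torch.zeros(2, requires_grad=True)
y = FracLinearFunction.apply(x, w, b)
loss = torch.nn.functional.mse_loss(y, torch.zeros_like(y))
loss.backward()
```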