Abstract

Most artificial intelligence (AI) algorithms can ultimately be cast as optimization problems that minimize a loss function. Two common loss functions are the mean squared error (MSE) and the mean absolute error (MAE), each with its own advantages and disadvantages. MAE reflects the error in the data more accurately and stably, but its derivative is discontinuous (it is non-differentiable at zero), so it is inefficient to optimize; MSE is differentiable everywhere, but squaring tends to exaggerate large errors and suppress small ones. A popular approach in AI is therefore to use the Huber loss function, which combines the advantages of MAE and MSE, as a framework. The Huber loss is usually minimized by gradient descent (GD) or stochastic gradient descent (SGD), but these methods easily fall into local optima and may fail to find the global optimal solution. In this paper, a t-distribution Yin-Yang-Pair optimization algorithm is proposed; it does not depend on gradient information and finds the global optimal solution of the Huber loss function more easily. Numerical experiments verify the effectiveness of the proposed algorithm.
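To make the trade-off concrete, the standard Huber loss is quadratic (MSE-like) for residuals smaller than a threshold δ and linear (MAE-like) beyond it. A minimal sketch, assuming a NumPy residual vector and the common default δ = 1 (the paper's own formulation and parameter choice may differ):

```python
import numpy as np

def huber_loss(residual, delta=1.0):
    """Huber loss: quadratic near zero (like MSE), linear in the tails (like MAE)."""
    r = np.abs(residual)
    return np.where(
        r <= delta,
        0.5 * r ** 2,               # small residuals: smooth, differentiable region
        delta * (r - 0.5 * delta),  # large residuals: linear growth limits outlier influence
    )
```

The two branches meet with matching value and slope at |r| = δ, which is what keeps the loss differentiable everywhere while avoiding MSE's quadratic blow-up on outliers.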
