Gradient descent is a popular optimization method, applicable when the objective function is differentiable and the problem is unconstrained. However, it faces several obstacles: the choice of the initial point, the type of objective function, and whether the solution found is global or merely local. In this work, we propose an optimization method for computing the global optimum of unconstrained, differentiable, mono-objective functions. The method is inspired by gradient descent; its principle is to use a new entity based on the direction of the gradient instead of the gradient itself. Unlike the gradient method and other gradient-based methods (the conjugate gradient method and Newton's method), where the initial point is chosen randomly, in our method the initial point is fixed from the start, and the shape of the objective function's curve is not an issue. Finally, we apply our method to several examples (a nonlinear function and quadratic functions, both non-convex and convex).
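As a rough illustration of the direction-based idea, the following Python sketch performs descent steps along the normalized gradient direction from a fixed initial point. This is not the paper's exact update rule (the "new entity" is defined in the body of the work); the test function, step size, iteration limit, and tolerance are illustrative assumptions.

```python
import numpy as np

def direction_descent(grad, x0, step=0.01, tol=1e-8, max_iter=5000):
    """Descend along the unit gradient direction (an illustrative
    stand-in for the paper's direction-based entity, not its exact
    update rule)."""
    x = np.asarray(x0, dtype=float)   # fixed initial point, chosen up front
    for _ in range(max_iter):
        g = grad(x)
        norm = np.linalg.norm(g)
        if norm < tol:                # gradient (nearly) vanishes: stationary point
            break
        x = x - step * g / norm       # move a fixed distance along the direction only
    return x

# Illustrative convex quadratic f(x) = (x1 - 1)^2 + (x2 + 2)^2
grad_f = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 2)])
print(direction_descent(grad_f, x0=[0.0, 0.0]))  # ends within ~step of (1, -2)
```

Because only the gradient's direction is used, each step has the same length regardless of the gradient's magnitude, so with a fixed step the iterate settles within roughly one step length of the minimizer; a diminishing step schedule would be needed for exact convergence.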