Abstract

Computing the gradient of a function provides fundamental information about its behavior. This information is essential for many applications and algorithms across various fields. One common application that requires gradients is optimization, through techniques such as stochastic gradient descent, Newton's method and trust region methods. However, these methods usually require a numerical approximation of the gradient at every iteration, which is prone to numerical errors. We propose a simple limited-memory technique for improving the accuracy of a numerically computed gradient in this gradient-based optimization framework by exploiting (1) a coordinate transformation of the gradient and (2) the history of previously taken descent directions. The method is verified empirically by extensive experimentation on both test functions and real-data applications. The proposed method is implemented in the R package smartGrad and in C++.
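
To make the idea concrete, the sketch below (in R, mirroring the smartGrad implementation language) illustrates one plausible reading of the abstract: central differences are taken along an orthonormal basis whose leading directions come from the history of descent directions, and the directional derivatives are mapped back to the original coordinates. The function names, the Gram-Schmidt construction and the step size `h` are illustrative assumptions, not the paper's exact algorithm.

```r
# Orthonormalize: start from the descent-direction history, then fill the
# remaining directions from the canonical basis (modified Gram-Schmidt).
# NOTE: this is an assumed construction for illustration only.
orthonormal_basis <- function(directions, n) {
  cand <- cbind(directions, diag(n))
  G <- matrix(0, n, 0)
  for (j in seq_len(ncol(cand))) {
    v <- cand[, j]
    if (ncol(G) > 0) v <- v - G %*% crossprod(G, v)   # remove existing components
    if (sqrt(sum(v^2)) > 1e-10) G <- cbind(G, v / sqrt(sum(v^2)))
    if (ncol(G) == n) break
  }
  G
}

# Central-difference gradient evaluated along the transformed coordinates,
# then rotated back to the original coordinate system.
smart_gradient <- function(f, x, directions, h = 1e-4) {
  n <- length(x)
  G <- orthonormal_basis(directions, n)
  g_dir <- vapply(seq_len(n), function(i) {
    d <- G[, i]
    (f(x + h * d) - f(x - h * d)) / (2 * h)           # derivative along direction d
  }, numeric(1))
  as.vector(G %*% g_dir)                              # gradient in x-coordinates
}

# Usage on the Rosenbrock test function, feeding in a hypothetical previous
# descent direction as the limited-memory history.
rosen <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
x0 <- c(-1.2, 1)
prev_dir <- matrix(c(1, 2), ncol = 1)
smart_gradient(rosen, x0, prev_dir)
```

The design intuition is that finite differences taken along directions informed by the optimizer's own trajectory can be better conditioned than differences along the canonical axes; the package presumably automates the bookkeeping of this direction history across iterations.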
