ABSTRACT Gradient optimization algorithms can be effectively studied from the perspective of ordinary differential equations (ODEs), where these algorithms are obtained through the temporal discretization of the continuous dynamics. This approach provides a powerful tool for understanding acceleration phenomena in optimization, thanks to time-scaling techniques and Lyapunov analysis. In a Hilbert setting, our study focuses on the fast optimization properties of inertial dynamics that combine asymptotically vanishing viscous damping with Hessian-driven damping. These dynamics arise as high-resolution ODEs of both the Nesterov accelerated gradient method and the Ravine method. For a general differentiable convex function f, choosing the viscous damping coefficient of the form α/t with α > 3 guarantees an inverse quadratic convergence rate of the values, i.e. o(1/t²), the weak convergence of the trajectories towards optimal solutions, and the fast convergence of the gradients towards zero. By carefully rescaling the dynamics in time, and depending on the tuning of the damping coefficient in front of the Hessian term, we identify the limit dynamic as α becomes large. In particular, we examine the case where the limit dynamic corresponds to the Levenberg-Marquardt regularization of the continuous Newton method. This analysis accounts for the fast convergence properties mentioned above and provides fresh insight into the complexity of these methods, which play a central role in optimization for high-dimensional problems. The interplay between numerical algorithms and continuous-time ODEs is fundamental to our analysis, both for the design and for the understanding of accelerated optimization algorithms. Numerical experiments are presented to illustrate and support the theoretical results.
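For reference, a minimal sketch of the prototypical inertial dynamic of this type, written in the notation of the abstract (the coefficient β ≥ 0 in front of the Hessian term is introduced here for illustration and need not match the paper's exact parametrization), is

  ẍ(t) + (α/t) ẋ(t) + β ∇²f(x(t)) ẋ(t) + ∇f(x(t)) = 0,   t > 0,   β ≥ 0.

Note that ∇²f(x(t)) ẋ(t) = d/dt ∇f(x(t)), so temporal discretizations of such dynamics only require differences of consecutive gradients rather than Hessian evaluations, which is what makes the resulting first-order algorithms practical in high dimension.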