This study evaluates the computational performance of five nonlinear conjugate gradient methods, namely Polak–Ribière–Polyak, Dai–Yuan, Hager–Zhang, Wei–Yao–Liu, and Rivaie–Mustafa–Ismail–Leong, under the strong Wolfe line search for solving unconstrained optimization problems. The algorithms were implemented in Scilab and tested on a diverse set of benchmark functions, including problems from the CUTE library. Performance was assessed by the number of iterations and CPU time, and the results were analyzed using Dolan and Moré performance profiles. Among the tested methods, the Hager–Zhang algorithm demonstrated superior efficiency and robustness, particularly on high-dimensional problems, owing to its conjugacy condition and descent property. The study emphasizes the critical role of the strong Wolfe line search in ensuring convergence and shows that appropriate initialization of the step length significantly affects performance. While Hager–Zhang consistently outperformed the other methods, Wei–Yao–Liu, Dai–Yuan, and Polak–Ribière–Polyak also produced competitive results in reducing gradient norms. In contrast, the Rivaie–Mustafa–Ismail–Leong method often failed to converge under certain conditions. This work provides practical insight into the application of nonlinear conjugate gradient methods to unconstrained problems across a range of dimensions, and the findings underline the importance of selecting robust algorithms and effective line search strategies to improve computational efficiency in large-scale optimization tasks.
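For context, the methods compared all follow the standard nonlinear conjugate gradient iteration; the sketch below uses the textbook statement of the strong Wolfe conditions and the commonly cited Hager–Zhang update, which may differ in minor safeguards from the exact variants implemented in the paper.

\[
x_{k+1} = x_k + \alpha_k d_k, \qquad d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0,
\]
where the step length \(\alpha_k\) satisfies the strong Wolfe conditions
\[
f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k g_k^{T} d_k, \qquad
\lvert g(x_k + \alpha_k d_k)^{T} d_k \rvert \le c_2 \lvert g_k^{T} d_k \rvert, \qquad 0 < c_1 < c_2 < 1,
\]
and, for the Hager–Zhang method,
\[
\beta_k^{HZ} = \frac{1}{d_k^{T} y_k} \left( y_k - 2 d_k \frac{\lVert y_k \rVert^{2}}{d_k^{T} y_k} \right)^{T} g_{k+1}, \qquad y_k = g_{k+1} - g_k.
\]
The other methods differ only in the choice of \(\beta_k\), which is why the line search and the \(\beta_k\) formula together determine the efficiency and descent behaviour reported in the study.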