This paper presents an adaptive backstepping approach to distributed optimization for a class of nonlinear multi-agent systems in which each agent is modeled in parametric strict-feedback form. In particular, the gradient functions of the local objective functions are not assumed to be known; instead, gradient values measured at the agents’ real-time outputs are used. A stepwise method is presented for deriving novel distributed adaptive optimization algorithms that steer the outputs of all agents to the optimal solution of the total objective function. First, a distributed adaptive optimization algorithm is developed for first-order nonlinear uncertain multi-agent systems, supported by Lyapunov-based stability analysis and convergence proofs. Second, by means of Lyapunov arguments in the spirit of backstepping, a distributed adaptive optimization algorithm is presented for high-order strict-feedback systems with parametric uncertainty. Extensions of the main result to practically important classes of systems, namely those with unknown virtual control coefficients, output feedback, and relative-measurement feedback, are also discussed.
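As a rough numerical illustration of the first-order setting only (this is a standard PI-consensus distributed gradient flow, not the paper's adaptive algorithm), the sketch below simulates single-integrator agents that each measure only the gradient of their own local objective at their current output. The quadratic objectives, the ring communication graph, and all gains are assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's algorithm):
# PI-consensus distributed gradient dynamics for agents x_i' = u_i.
# Each agent uses only its locally measured gradient grad f_i(x_i)
# and relative information from its neighbors.

c = np.array([1.0, 3.0, -2.0, 6.0])        # f_i(x) = (x - c_i)^2 / 2, so sum f_i is minimized at mean(c)
L = np.array([[ 2, -1,  0, -1],            # graph Laplacian of a 4-agent ring
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

x = np.zeros(4)                            # agent outputs
v = np.zeros(4)                            # integral (PI) consensus states
dt, alpha = 0.01, 1.0                      # Euler step and consensus gain (assumed values)

for _ in range(20000):
    grad = x - c                           # gradient measured at the agent's real-time output
    u = -grad - alpha * (L @ x) - L @ v    # distributed control law
    v += dt * (L @ x)                      # integral action enforces exact optimality
    x += dt * u

# All outputs reach consensus at the minimizer of sum_i f_i, i.e. mean(c) = 2.0
print(np.round(x, 3))                      # → [2. 2. 2. 2.]
```

At equilibrium, summing the control law over all agents cancels the Laplacian terms, forcing the sum of the measured gradients to zero at the common output value, which is exactly the optimality condition for the total objective.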