This work introduces parallel control, for the first time, into the design of an optimal consensus control strategy for continuous-time nonlinear multiagent systems (MASs) via adaptive dynamic programming (ADP). First, the control input is incorporated into the feedback system to form the parallel control structure, so that optimal consensus control of the resulting augmented system, under an appropriately constructed augmented performance index function, is equivalent to suboptimal consensus control of the original system under a conventional performance index. Second, the feasibility of the proposed control scheme is analyzed by means of the policy iteration algorithm, and its convergence is demonstrated. Then, an online learning algorithm is developed to implement the ADP-based optimal parallel consensus control protocol without prior knowledge of the system dynamics. The Lyapunov approach is employed to show that the relevant signals converge. Finally, the experimental results support the theoretical findings.
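As a minimal sketch of the parallel control construction described above (the affine dynamics f_i, g_i, the weights Q, S, R, the consensus error e_i, and the virtual input v_i are illustrative assumptions, not the paper's exact formulation), the control input of each agent can be treated as an additional state whose derivative acts as the new control variable, and a standard policy iteration can then be applied to the augmented system:

```latex
% Assumed original agent dynamics (affine in the control):
%   \dot{x}_i = f_i(x_i) + g_i(x_i)\,u_i
% Parallel control treats u_i as a state and introduces a virtual input v_i:
\dot{x}_i = f_i(x_i) + g_i(x_i)\,u_i, \qquad \dot{u}_i = v_i,
\qquad X_i = \begin{bmatrix} x_i \\ u_i \end{bmatrix},
\qquad \dot{X}_i = F_i(X_i) + G_i\, v_i .

% A typical augmented performance index over the consensus error e_i
% (weights Q, S, R are assumed positive definite):
J_i(X_i, v_i) = \int_{0}^{\infty}
  \bigl( e_i^{\top} Q\, e_i + u_i^{\top} S\, u_i + v_i^{\top} R\, v_i \bigr)\, dt .

% Standard continuous-time policy iteration on the augmented system:
% policy evaluation (solve for V^{(k)} along the closed-loop trajectory):
0 = e_i^{\top} Q\, e_i + u_i^{\top} S\, u_i
    + \bigl(v_i^{(k)}\bigr)^{\!\top} R\, v_i^{(k)}
    + \bigl(\nabla V^{(k)}\bigr)^{\!\top}
      \bigl( F_i(X_i) + G_i\, v_i^{(k)} \bigr),
% policy improvement:
v_i^{(k+1)} = -\tfrac{1}{2}\, R^{-1} G_i^{\top} \nabla V^{(k)} .
```

Under such a construction, minimizing the augmented index over the virtual input v_i yields a consensus controller for the augmented system whose restriction to the original system is suboptimal with respect to a conventional performance index, which is the equivalence referred to above.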