To study optimal control and disturbance attenuation problems, two prominent (and somewhat alternative) strategies have emerged in the last century: dynamic programming (DP) and Pontryagin's minimum principle (PMP). The former characterizes the solution by shaping the closed-loop dynamics (a priori unknown) via the selection of a feedback input, at the price, however, of solving (typically daunting) partial differential equations. The latter, instead, provides (extended) dynamics that must be satisfied by the optimal process, for which boundary conditions (a priori unknown) must be determined. The results discussed in this article combine the two approaches by matching the corresponding trajectories, i.e., by combining the underlying sources of information: knowledge of the complete initial condition from DP and of the optimal dynamics from PMP. The proposed approach provides insights for linear as well as nonlinear systems. In the case of linear systems, the derived conditions lead to matrix algebraic equations, similar to the classic algebraic Riccati equations (AREs), although with coefficients defined as polynomial functions of the input gain matrix; the coefficient of the quadratic term of such an equation is sign definite even when the corresponding coefficient of the original ARE is sign indefinite, as is typically the case in the H∞ control problem. This feature is particularly appealing from a computational point of view, since it permits the use of standard minimization techniques for convex functions, such as the gradient algorithm. In the presence of nonlinear dynamics, the strategy leads to algebraic equations that allow the optimal feedback to be (locally) constructed by considering the behavior of the closed-loop dynamics at a single point in the state space.
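As a point of reference for the classic ARE that the conditions above generalize, the following sketch solves the standard (LQR) Riccati equation and recovers the optimal feedback gain for a double-integrator system. The system matrices here are an illustrative assumption, not taken from the article, and the modified equations with gain-dependent polynomial coefficients are not reproduced; this only shows the baseline object being discussed.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double integrator (assumed example, not from the article)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state weight
R = np.array([[1.0]])   # input weight

# Classic ARE: A'P + P A - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal feedback u = -K x, with K = R^{-1} B' P
K = np.linalg.solve(R, B.T @ P)

# Check the residual of the ARE is (numerically) zero
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
print("K =", K)
print("max |residual| =", np.abs(residual).max())
```

For this particular system the solution is known in closed form, P = [[√3, 1], [1, √3]] and K = [1, √3], which makes the numerical output easy to sanity-check.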