We draw a connection between the complexity of training physics-informed neural networks (PINNs) and the Kolmogorov n-width of the solution. Leveraging this connection, we then propose Lagrangian PINNs (LPINNs) as a partial differential equation (PDE)-informed solution for convection-dominated problems. PINNs employ neural networks to find the solutions of PDE-constrained optimization problems, with initial and boundary conditions imposed as soft or hard constraints. These soft constraints are often blamed as the source of the complexity of the PINN training phase. Here, we demonstrate that the complexity of training (i) is closely related to the Kolmogorov n-width associated with problems exhibiting transport, convection, traveling waves, or moving fronts, and therefore becomes apparent in convection-dominated flows, and (ii) persists even when the boundary conditions are strictly enforced. Given this realization, we describe the mechanism underlying training schemes such as those used in eXtended PINNs (XPINNs), curriculum learning, and sequence-to-sequence learning. For an important class of PDEs, namely those governed by the non-linear convection–diffusion equation, we propose reformulating PINNs on a Lagrangian frame of reference, i.e., LPINNs, as a PDE-informed solution. We propose a parallel architecture with two branches: one branch solves for the state variables along the characteristics, and the other solves for the low-dimensional characteristic curves themselves. The proposed architecture conforms to the causality innate to convection and leverages the direction in which information travels through the domain, i.e., along the characteristics. This approach is unique in that it reduces the complexity of convection-dominated PINNs at the PDE level, rather than through optimization strategies and/or schedulers. Finally, we demonstrate that the loss landscapes of LPINNs are less sensitive to the so-called "complexity" of the problem, i.e., convection, than those of traditional PINNs in the Eulerian framework.
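To make the reformulation concrete, consider a one-dimensional sketch; the symbols below ($u$ for the state, $c(u)$ for the convective velocity, $\nu$ for the diffusivity) are chosen here for illustration, as the abstract does not fix a notation. In the Eulerian frame, the non-linear convection–diffusion equation reads
\[
\frac{\partial u}{\partial t} + c(u)\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2},
\]
whereas on the Lagrangian frame the same dynamics split into a low-dimensional equation for the characteristic curves and an evolution equation for the state carried along them,
\[
\frac{\mathrm{d}x}{\mathrm{d}t} = c(u), \qquad \frac{\mathrm{d}u}{\mathrm{d}t} = \nu\,\frac{\partial^2 u}{\partial x^2},
\]
so the stiff convective term is absorbed into the motion of the frame itself.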
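The following is a minimal sketch of how the two-branch architecture might be realized, not the authors' implementation. It assumes PyTorch, one spatial dimension, the inviscid limit ($\nu = 0$) with $c(u) = u$ (inviscid Burgers), and illustrative names throughout (`LPINN`, `x_branch`, `u_branch`): one branch outputs the characteristic position $x(t; x_0)$, the other the state $u$ on that characteristic, and the residuals of both Lagrangian equations are penalized.

```python
# A minimal sketch, not the authors' implementation. Assumptions: PyTorch,
# one spatial dimension, the inviscid limit (nu = 0) with c(u) = u
# (inviscid Burgers); all names (LPINN, x_branch, u_branch) are illustrative.
import torch
import torch.nn as nn


def mlp(width: int = 50) -> nn.Sequential:
    # Small fully connected network mapping (t, x0) -> a scalar field.
    return nn.Sequential(
        nn.Linear(2, width), nn.Tanh(),
        nn.Linear(width, width), nn.Tanh(),
        nn.Linear(width, 1),
    )


class LPINN(nn.Module):
    """Two parallel branches over the Lagrangian coordinates (t, x0):
    one predicts the characteristic position x(t; x0), the other the
    state u carried along that characteristic."""

    def __init__(self) -> None:
        super().__init__()
        self.x_branch = mlp()  # low-dimensional characteristic curves
        self.u_branch = mlp()  # state variables on the characteristics

    def forward(self, t: torch.Tensor, x0: torch.Tensor):
        z = torch.cat([t, x0], dim=-1)
        return self.x_branch(z), self.u_branch(z)


def lagrangian_residuals(model: LPINN, t: torch.Tensor, x0: torch.Tensor):
    """Residuals of the Lagrangian system: dx/dt - c(u) = 0 for the
    characteristics and du/dt = 0 for the state (pure convection)."""
    t = t.clone().requires_grad_(True)
    x, u = model(t, x0)
    dx_dt = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    du_dt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    return dx_dt - u, du_dt  # c(u) = u for Burgers


# Usage: minimize the mean-squared residuals of both branches together with
# initial-condition terms, e.g. x(0; x0) = x0 and u(0; x0) = u0(x0).
model = LPINN()
t = torch.rand(256, 1)
x0 = 2.0 * torch.rand(256, 1) - 1.0
r_x, r_u = lagrangian_residuals(model, t, x0)
loss = (r_x**2).mean() + (r_u**2).mean()
```

Reinstating the diffusion term $\nu\,\partial^2 u/\partial x^2$ would require Eulerian derivatives of $u$, obtained via the chain rule through $\partial x/\partial x_0$; that step is omitted here for brevity.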