We study the long-run properties of optimal control problems in continuous time, where the running cost of a control problem is evaluated by a probability measure over $[0,+\infty)$. Li et al. ("Limit value for optimal control with general means," Discrete Contin. Dyn. Syst. Ser. A, vol. 36, pp. 2113–2132, 2016) introduced an asymptotic regularity condition requiring a sequence of probability measures to become more and more uniform over $[0,+\infty)$, in order to study the limit properties of the value functions with respect to the evaluation. In the particular cases of the $t$-horizon Cesàro mean and the $\rho$-discounted Abel mean, this condition amounts to the horizon $t$ tending to infinity or the discount factor $\rho$ tending to zero. For control problems defined on a compact domain, satisfying a nonexpansiveness condition, and with a running cost depending on the state variable only, Li et al. proved the existence of a general limit value, i.e., the value function converges uniformly as the evaluation becomes more and more regular. Within the same framework, we prove the existence of a general uniform value: for any $\varepsilon>0$, there is a robust optimal control that guarantees the general limit value up to $\varepsilon$ for every control problem whose cost is evaluated by a sufficiently regular probability measure. This extends the result of Quincampoix and Renault ("On the existence of a limit value in some nonexpansive optimal control problems," SIAM J. Control Optim., vol. 49, pp. 2118–2132, 2011), who proved the existence of a uniform value for running costs evaluated by Cesàro means only. Under the compactness and nonexpansiveness conditions we assume, the limit value obtained is in general a function of the initial state, a feature not captured by the traditional ergodic or dissipative approaches to long-run control problems.
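For concreteness, the evaluation by a probability measure and the two classical means mentioned above can be sketched as follows (the notation $g$ for the running cost and $y(\cdot,x,u)$ for the trajectory starting from $x$ under control $u$ is assumed here, not taken from the abstract):

```latex
% Value of the control problem evaluated by a probability measure \theta on [0,+\infty):
V_\theta(x) \;=\; \inf_{u}\,\int_0^{+\infty} g\bigl(y(s,x,u)\bigr)\,\mathrm{d}\theta(s).
% The t-horizon Cesàro mean corresponds to the choice
\mathrm{d}\theta(s) \;=\; \tfrac{1}{t}\,\mathbf{1}_{[0,t]}(s)\,\mathrm{d}s,
% while the \rho-discounted Abel mean corresponds to
\mathrm{d}\theta(s) \;=\; \rho\, e^{-\rho s}\,\mathrm{d}s.
% Both are probability measures on [0,+\infty); they become "more and more regular"
% as t \to +\infty and \rho \to 0, respectively.
```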