This paper discusses a two-person zero-sum differential game, as treated, for example, in [4–7]. It is known that, if the Isaacs condition is not satisfied, there is no single concept of value for the differential game, and several definitions of value are given in the literature, of which the upper value and the lower value are the most important. For the upper value, the maximizing player has the advantage of knowing instantaneously, at each time t, the value of the control chosen by his opponent at that time. This is formalized by a non-anticipative function, which we call a strategy, from the opponent's control functions to his own control functions. If the minimizing player knows which strategy the maximizing player will employ, he can determine the optimal control function that he should use. In this formulation, therefore, the maximizing player chooses his strategy first, and the questions investigated in this paper concern the properties of optimal, or almost optimal, strategies. Even in the stochastic case, where matters are sometimes smoother, and which was investigated in [3], the calculations are very delicate.

The paper proceeds by first showing that the set of strategies is a complete metric space under a certain metric, and then applying the non-convex minimization result of Ekeland [2]. The dynamic programming result of [6] is next recalled. By declaring his strategy in advance the maximizing player certainly gives his opponent information about his future behaviour, but, by using the dynamic programming identities, games played over shorter and shorter time intervals are considered; in these, the maximizing player reveals less and less information about his future play. If the dynamics and payoff satisfy Lipschitz conditions, the upper value function is differentiable almost everywhere, and at points of differentiability it is shown that an ε-optimal strategy should, on average, almost maximize the Hamiltonian.
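To fix ideas, the objects just described can be sketched in standard notation; the symbols below (f, h, g, α, V⁺) are illustrative choices, not necessarily the paper's own.

```latex
% Dynamics driven by the controls y (maximizer) and z (minimizer),
% and the payoff over the horizon [t, T]:
\[
  \dot{x}(s) = f\bigl(s, x(s), y(s), z(s)\bigr), \qquad
  P(y,z) = g\bigl(x(T)\bigr) + \int_t^T h\bigl(s, x(s), y(s), z(s)\bigr)\,ds .
\]
% The upper value lets the maximizing player respond through a
% non-anticipative strategy \alpha mapping z(\cdot) to y(\cdot):
\[
  V^{+}(t,x) \;=\; \sup_{\alpha}\,\inf_{z(\cdot)}\; P\bigl(\alpha[z],\, z\bigr).
\]
% The Isaacs condition asks that, for all (t,x,p),
\[
  \max_{y}\,\min_{z}\,\bigl[\, p \cdot f(t,x,y,z) + h(t,x,y,z) \,\bigr]
  \;=\;
  \min_{z}\,\max_{y}\,\bigl[\, p \cdot f(t,x,y,z) + h(t,x,y,z) \,\bigr];
\]
% when it fails, the upper value and the analogous lower value
% (with the order of sup and inf reversed) may differ.
```

The bracketed expression is the Hamiltonian referred to at the end of the section: at a point where V⁺ is differentiable, p plays the role of its spatial gradient, and an ε-optimal strategy should, on average, come within roughly ε of the maximum in y.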