Abstract. In this article, we study a stochastic optimal control problem in the pathwise sense, as initially proposed by Lions and Souganidis in [C. R. Acad. Sci. Paris Ser. I Math., 327 (1998), pp. 735-741]. The corresponding Hamilton-Jacobi-Bellman (HJB) equation, which turns out to be a non-adapted stochastic partial differential equation, is analyzed. Using the viscosity solution framework, we show that the value function of the optimal control problem is the unique solution of the HJB equation. When the optimal drift is defined, we characterize it. Finally, we describe the associated conserved quantities, namely the space-time transformations leaving our pathwise action invariant.