Decision making under uncertainty is often viewed as an optimization problem under a choice criterion, but its calibration raises the inverse problem of recovering the criterion from the data. A classical example is Samuelson's theory of revealed preference \cite{samuelson1938}. Here the observable is a so-called increasing characteristic process $\mbX=(\scX_t(x))$, and the objective is to recover a dynamic stochastic utility $\bfU$ in the sense that $U(t,\scX_t(x))$ is a martingale. A linearized version is provided by the first-order condition $U_x(t,\scX_t(x))=Y_t(u_z(x))$, together with the additional martingale conditions on the processes $\partial_x\scX_t(x)\,Y_t(u_z(x))$ and $\scX_t(x)\,Y_t(u_z(x))$. When $\mbX$ and $\bfY$ are regular solutions of two SDEs with random coefficients, under a strong martingale condition, any revealed utility solves a nonlinear SPDE and is the stochastic value function of some optimization problem. More interesting is the dynamic equilibrium problem of He and Leland \cite{HL}, where $Y$ is coupled with $\scX$, so that the monotonicity of $Y_t(z,u_z(z))$ is lost. Nevertheless, we solve the He \& Leland problem (in a random environment) by characterizing all the equilibria: the adjoint process is still linear in $y$ (a geometric Brownian motion in the Markovian case), and the conjugate utilities are a deterministic mixture of stochastic dual power utilities. Moreover, the primal utility is the value function of an optimal wealth allocation in a Pareto problem.
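
For readability, the conditions characterizing a revealed utility can be displayed together; the following is a schematic restatement in the notation already introduced, where $\partial_x$ denotes differentiation with respect to the parameter $x$:
%% Schematic summary of the revealed-utility conditions stated above.
\begin{align*}
&\text{(i)}\quad \big(U(t,\scX_t(x))\big)_{t\ge 0}\ \text{is a martingale (dynamic utility of } \mbX\text{)};\\
&\text{(ii)}\quad U_x(t,\scX_t(x)) = Y_t(u_z(x)) \quad \text{(first-order condition)};\\
&\text{(iii)}\quad \big(\partial_x\scX_t(x)\,Y_t(u_z(x))\big)_{t\ge 0}\ \text{and}\ \big(\scX_t(x)\,Y_t(u_z(x))\big)_{t\ge 0}\ \text{are martingales}.
\end{align*}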