This paper investigates a differential inclusion optimal control problem with a state constraint (P) through an approximation of viscosity solutions of the Hamilton–Jacobi–Bellman (HJB) equation. By invoking a duality result between the control problem (P) and the upper hull of viscosity subsolutions of the HJB equation, we prove that the value functions of the discrete problems solve discrete Bellman equations and converge to the value function of (P). We also establish that the sequence of optimal trajectories of the discrete problems converges to a solution of the optimal control problem (P). The approximation is based on Euler polygonal arcs and Subbotin's proximal aiming condition, which allows discrete trajectories to violate the state constraint, evolve in a neighbourhood of it, and then converge to it, rather than remaining in the state-constraint set for all time.