In this paper, we study discounted stochastic games with Borel state spaces and compact, state-dependent action spaces. The primitives of our model satisfy standard continuity and measurability conditions. The transition probability is a convex combination of finitely many probability measures on the state space, each dominated by some finite measure; the coefficients of the combination depend on both states and action profiles. This class of models contains stochastic games with Borel state spaces and finite, state-dependent action sets. Our main result establishes the existence of subgame perfect equilibria that are stationary in the sense that each player's equilibrium strategy is determined by a single function of the current and previous states of the game. This dependence is called almost Markov. Our result strengthens both the 1991 theorem of Mertens and Parthasarathy for games with finite, state-independent action sets, in which the equilibrium strategies also depended on calendar time, and their result on stationary equilibria, proved under the additional condition that the transition probabilities are atomless. A recent counterexample of Levy shows that stationary Markov perfect equilibria need not exist in the class of games considered in this paper. The results of this work are illustrated by dynamic Cournot games, which have previously been studied in the literature under much stronger assumptions.