ABSTRACT

We study a two-player discounted zero-sum stochastic game model for dynamic operational planning in military campaigns. At each stage, the players manage multiple commanders who order military actions on objectives that have an open line of control. When a battle over the control of an objective occurs, its stochastic outcome depends on the actions taken and on the enabling support provided by the control of other objectives. Each player aims to maximize the cumulative number of objectives they control, weighted by their criticality. To solve this large-scale stochastic game, we derive properties of its Markov perfect equilibria by leveraging the logistics and the military operational command-and-control structure. We show that the optimal value function is isotone with respect to the partially ordered state space, which in turn leads to a significant reduction of the state and action spaces. We also accelerate Shapley's value iteration algorithm by eliminating dominated actions and by investigating pure equilibria of the matrix game solved at each iteration. We demonstrate the computational value of our equilibrium results on a case study that reflects representative operational-level military campaigns with geopolitical implications. Our analysis reveals a complex interplay between the game's parameters and dynamics in equilibrium, yielding new military insights for campaign analysts.
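For context, a minimal sketch of the update underlying Shapley's value iteration for a discounted zero-sum stochastic game is given below; the notation ($S$, $A(s)$, $B(s)$, $r$, $p$, $\gamma$, $\operatorname{val}$) is illustrative and not taken from the paper's own formulation. At each state $s$, the operator
\[
  (Tv)(s) \;=\; \operatorname{val}_{\,x \in \Delta(A(s)),\, y \in \Delta(B(s))}
  \sum_{a \in A(s)} \sum_{b \in B(s)} x_a\, y_b
  \Big[\, r(s,a,b) \;+\; \gamma \sum_{s' \in S} p(s' \mid s,a,b)\, v(s') \,\Big]
\]
replaces the current value estimate by the minimax value of a stage matrix game built from immediate rewards plus discounted continuation values. Since $T$ is a $\gamma$-contraction, iterating $v_{k+1} = T v_k$ converges to the equilibrium value of the discounted game. The accelerations mentioned in the abstract can be read against this template: eliminating dominated rows and columns shrinks the matrix game solved at each state, and detecting a pure equilibrium avoids solving the associated linear program altogether.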