Motivated by the confrontation game between friendly and enemy Unmanned System Groups (USGs) in the continuous dynamic environment of future aerial combat, this study investigates the USG Autonomous Collaborative Combat Strategy (ACCS) and proposes the Parallel Decoupling Multi-Agent Deep Deterministic Policy Gradient (PD-MADDPG) algorithm. For each USG member, an independent Parallel Benchmark Critic (PB-Critic) network and Parallel Decoupling Critic (PD-Critic) network are constructed to maximize the group reward and the individual member reward in a time-parallel manner. A Forerunner Mechanism (FM) is integrated into the parallel decoupling reward function to mitigate the reward sparsity that this function suffers in the early stage of training and to improve the convergence efficiency of the USG in the continuous dynamic environment. A Symmetric Attention Mechanism (SAM) is introduced into the Critic and Actor networks to narrow the screening radius of information relevant to the confrontation game between the two USGs. The effectiveness and feasibility of the proposed strategy are verified through simulations against multiple typical confrontation control strategies assumed for the enemy USG. The results show that PD-MADDPG remedies several inherent shortcomings of the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm: policy training convergence and execution stability are greatly enhanced, and the behavioral autonomy of the USG for collaborative combat in the continuous dynamic environment is further improved.
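To make the per-agent actor/dual-critic layout summarized above more concrete, the sketch below shows one possible PyTorch arrangement: each USG member carries an actor plus two centralized critics with a self-attention layer over agent embeddings, one critic intended for the group reward and one for the member's own reward. This is a minimal illustration under stated assumptions; the layer sizes, the attention formulation, and all class and method names are invented here and are not the paper's actual definitions.

```python
# Minimal sketch (assumptions only): actor + two attention-based critics
# per agent, loosely mirroring the PB-Critic / PD-Critic split described
# in the abstract. Not the paper's implementation.
import torch
import torch.nn as nn


class Actor(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # continuous control actions
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class AttentionCritic(nn.Module):
    """Centralized critic: embeds each agent's (obs, act) pair, applies
    self-attention across agents, and regresses a scalar Q-value."""

    def __init__(self, n_agents: int, obs_dim: int, act_dim: int,
                 embed: int = 64, heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim, embed)
        self.attn = nn.MultiheadAttention(embed, heads, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(n_agents * embed, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, obs_all: torch.Tensor, act_all: torch.Tensor) -> torch.Tensor:
        # obs_all: (batch, n_agents, obs_dim); act_all: (batch, n_agents, act_dim)
        x = self.embed(torch.cat([obs_all, act_all], dim=-1))
        x, _ = self.attn(x, x, x)                  # attend across agents
        return self.head(x.flatten(start_dim=1))   # one Q-value per sample


class AgentNetworks(nn.Module):
    """One USG member: actor + benchmark critic (group reward)
    + decoupling critic (individual reward)."""

    def __init__(self, n_agents: int, obs_dim: int, act_dim: int):
        super().__init__()
        self.actor = Actor(obs_dim, act_dim)
        self.pb_critic = AttentionCritic(n_agents, obs_dim, act_dim)  # group-level Q
        self.pd_critic = AttentionCritic(n_agents, obs_dim, act_dim)  # member-level Q


if __name__ == "__main__":
    n_agents, obs_dim, act_dim = 3, 10, 2
    agent = AgentNetworks(n_agents, obs_dim, act_dim)
    obs = torch.randn(4, n_agents, obs_dim)
    acts = torch.stack([agent.actor(obs[:, i]) for i in range(n_agents)], dim=1)
    print(agent.pb_critic(obs, acts).shape, agent.pd_critic(obs, acts).shape)
```

In this reading, the two critics share an architecture but are trained against different reward signals (group versus individual), which is one plausible way to realize the "parallel decoupling" idea named in the abstract.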