In this article, an expert-system-based multiagent deep deterministic policy gradient (ESB-MADDPG) method is proposed to realize decision making for swarm robots. Multiagent deep deterministic policy gradient (MADDPG) is a multiagent reinforcement learning algorithm that employs a centralized critic within the actor-critic framework to reduce policy gradient variance. However, traditional MADDPG is difficult to apply to swarm robots directly because its path planning is time consuming, which calls for a faster way to gather trajectories. In addition, the trajectories produced by MADDPG are piecewise linear and therefore not smooth, making them difficult for swarm robots to track. This article addresses these gaps. First, the ESB-MADDPG method is proposed to improve the training speed, and smoothing of the planned trajectories is incorporated into ESB-MADDPG. Furthermore, the expert system provides a library of trained offline trajectories, which avoids retraining every time the swarm robots are deployed. Given the gathered trajectories, a model predictive control (MPC) algorithm is introduced to optimally track the offline trajectories. Simulation results show that combining ESB-MADDPG and MPC realizes swarm robot decision making efficiently.
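To make the tracking stage concrete, the sketch below illustrates a receding-horizon MPC step that follows a precomputed (offline) reference trajectory. It is a minimal illustration only, assuming a discrete-time 2-D double-integrator robot model, an unconstrained quadratic tracking cost, and illustrative weights and horizon; the function names `double_integrator` and `mpc_step`, the circular reference, and all numerical values are hypothetical and are not the formulation used in the article.

```python
import numpy as np

def double_integrator(dt):
    """Discrete-time 2-D double-integrator model (illustrative robot dynamics)."""
    A = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    B = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]], dtype=float)
    return A, B

def mpc_step(A, B, x0, ref, Q, R):
    """One receding-horizon step: track `ref` (N x nx) from state x0.

    With an unconstrained quadratic tracking cost, the optimal input
    sequence has a closed-form least-squares solution; only the first
    input is applied (receding horizon).
    """
    N, nx = ref.shape
    nu = B.shape[1]
    # Condensed prediction: X = Phi @ x0 + Gamma @ U
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((N * nx, N * nu))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    # argmin_U (Phi x0 + Gamma U - r)^T Qbar (.) + U^T Rbar U
    H = Gamma.T @ Qbar @ Gamma + Rbar
    g = Gamma.T @ Qbar @ (ref.reshape(-1) - Phi @ x0)
    U = np.linalg.solve(H, g)
    return U[:nu]

if __name__ == "__main__":
    dt, horizon, steps = 0.1, 10, 60
    A, B = double_integrator(dt)
    Q = np.diag([10.0, 10.0, 1.0, 1.0])   # weight position errors more than velocity
    R = 0.01 * np.eye(2)
    # Hypothetical smoothed offline trajectory: a circular arc sampled over time
    t = np.arange(steps + horizon) * dt
    traj = np.stack([np.cos(t), np.sin(t), -np.sin(t), np.cos(t)], axis=1)
    x = traj[0].copy()
    for k in range(steps):
        u = mpc_step(A, B, x, traj[k + 1:k + 1 + horizon], Q, R)
        x = A @ x + B @ u
    print("final position tracking error:", np.linalg.norm(x[:2] - traj[steps][:2]))
```

A practical tracker for swarm robots would additionally impose input and state constraints (solved with a QP solver rather than in closed form) and use the actual robot kinematics, but the receding-horizon structure shown here is the same.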