On-ramp merging is a complex traffic scenario in autonomous driving. Because of the uncertainty of the driving environment, most rule-based models cannot handle this problem well. This paper designs a ramp-merging decision model based on the deep deterministic policy gradient (DDPG) algorithm to solve the vehicle merging problem. Previous deep reinforcement learning algorithms applied to intelligent-vehicle ramp merging suffer from slow merging and poor robustness, which lead to a low merging success rate. To address these problems, first, we introduce a simple recurrent unit (SRU) to extract the intelligent vehicle's state and environment features and use the DDPG algorithm for decision making. Second, we improve the experience replay buffer of the DDPG algorithm by replacing uniform sampling with prioritized sampling. Finally, we design a multi-objective reward function for training that accounts for factors such as safety and efficiency. Simulation experiments show that the improved algorithm increases the merging speed of the model, reduces the collision rate, and enables the vehicle to make more reasonable decisions. In addition, the superiority of the method is demonstrated by comparison with a state-of-the-art method.
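To illustrate the prioritized sampling and multi-objective reward ideas summarized above, the following is a minimal Python sketch, not the paper's implementation; the buffer capacity, priority exponent, reward weights, and target speed are assumed values chosen only for the example.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized replay buffer (illustrative sketch)."""
    def __init__(self, capacity=100_000, alpha=0.6):
        self.capacity = capacity      # assumed buffer size
        self.alpha = alpha            # priority exponent (assumed value)
        self.data = []
        self.priorities = []
        self.pos = 0

    def add(self, transition, td_error=1.0):
        # Priority grows with |TD error|, so informative transitions
        # are drawn more often than under uniform sampling.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(priority)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size=64):
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=probs)
        return [self.data[i] for i in idx], idx

    def update_priorities(self, idx, td_errors):
        # Refresh priorities after each DDPG critic update.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha


def merge_reward(collided, speed, target_speed=25.0, w_safe=1.0, w_eff=0.1):
    """Toy multi-objective reward combining safety and efficiency terms.
    Weights and target speed are placeholders, not the paper's values."""
    safety = -100.0 if collided else 0.0
    efficiency = -abs(speed - target_speed) / target_speed
    return w_safe * safety + w_eff * efficiency
```

In a DDPG training loop, transitions would be stored with `add`, mini-batches drawn with `sample`, and priorities refreshed with `update_priorities` using the critic's TD errors, while `merge_reward` stands in for the paper's multi-objective reward shaping.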