The optimization and control of traffic signals is important for logistics and transportation: it improves the operational efficiency and safety of road traffic and supports the intelligent, green, and sustainable development of modern cities. To improve the effectiveness of traffic signal control, this paper proposes a traffic signal optimization method for urban traffic scenarios based on deep reinforcement learning and the Simulation of Urban Mobility (SUMO) software. An intersection training scenario was built with the SUMO microscopic traffic simulator, and the maximum vehicle queue length and vehicle queue time were selected as performance evaluation indicators. To better reflect real conditions, the experiments use a Weibull distribution to simulate vehicle generation. Because deep reinforcement learning combines perception and decision-making capabilities, this study proposes a traffic signal optimization control model based on the Deep Q Network (DQN) algorithm, taking into account the realism and complexity of traffic intersections, and first trains the model with the DQN algorithm in the training scenario. A G-DQN (Grouping-DQN) algorithm is then proposed to address two problems in existing studies: state definitions that do not accurately represent the traffic state, and slow convergence of the neural network. Finally, the performance of the G-DQN model was compared with the original DQN model and an Advantage Actor-Critic (A2C) model. The experimental results show that the improved algorithm improved all of the main performance indicators.
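As a rough illustration of how Weibull-distributed vehicle generation can be prepared for a SUMO scenario, the sketch below draws departure times from a Weibull distribution and writes a minimal route file. The number of vehicles, shape parameter, episode length, and edge IDs are hypothetical assumptions for illustration, not values taken from the paper.

```python
import numpy as np

# Hypothetical scenario parameters (not from the paper): 1000 vehicles over a
# 5400-step episode, departure times drawn from a Weibull distribution.
N_VEHICLES = 1000
EPISODE_STEPS = 5400
WEIBULL_SHAPE = 2.0

def weibull_departures(n, horizon, shape, seed=42):
    """Return n sorted departure times (in simulation steps)."""
    rng = np.random.default_rng(seed)
    samples = rng.weibull(shape, size=n)
    # Rescale the samples into [0, horizon) so every vehicle departs in-episode.
    return np.sort(samples / samples.max() * (horizon - 1))

def write_route_file(path, departures):
    """Write a minimal SUMO .rou.xml with one vehicle per departure time."""
    with open(path, "w") as f:
        f.write("<routes>\n")
        # Edge IDs here are placeholders for the intersection's approach edges.
        f.write('    <route id="W_E" edges="west_in east_out"/>\n')
        for i, t in enumerate(departures):
            f.write(f'    <vehicle id="veh_{i}" route="W_E" depart="{t:.2f}"/>\n')
        f.write("</routes>\n")

if __name__ == "__main__":
    write_route_file("episode_routes.rou.xml",
                     weibull_departures(N_VEHICLES, EPISODE_STEPS, WEIBULL_SHAPE))
```

A Weibull distribution is a common choice for this purpose because, with a shape parameter above 1, the arrival rate ramps up and then tails off, loosely resembling a traffic peak rather than a uniform arrival stream.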
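The DQN-based control model is described only at a high level here. For orientation, the following is a minimal sketch of the standard DQN ingredients (Q-network, experience replay, and epsilon-greedy phase selection) in PyTorch; the state dimension, action set, network size, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Assumed sizes for a single four-arm intersection: a discretized
# lane-occupancy state vector and four green-phase actions.
STATE_DIM, N_ACTIONS = 80, 4
GAMMA, EPSILON = 0.95, 0.1

class QNet(nn.Module):
    """Small fully connected network mapping a state to one Q-value per phase."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=50_000)  # experience replay buffer of (s, a, r, s', done)

def select_action(state):
    """Epsilon-greedy choice of the next green phase."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=64):
    """One gradient step on a random minibatch from the replay buffer."""
    if len(replay) < batch_size:
        return
    s, a, r, s2, done = zip(*random.sample(replay, batch_size))
    s = torch.tensor(np.array(s), dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64)
    r = torch.tensor(r, dtype=torch.float32)
    s2 = torch.tensor(np.array(s2), dtype=torch.float32)
    done = torch.tensor(done, dtype=torch.float32)

    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Standard DQN target: reward plus discounted max Q of the next state.
        target = r + GAMMA * target_net(s2).max(dim=1).values * (1.0 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a SUMO-based setup such as the one described above, the state would typically be read from the simulator at each decision step (for example via TraCI) and the reward derived from changes in the evaluation indicators, such as queue length or waiting time; the exact definitions used by the paper, and the grouping mechanism of G-DQN, are not reproduced here.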