With the growing popularity of electric vehicles (EVs), EV charging scheduling control in complex urban environments has become a hot research topic, especially the use of multi-agent reinforcement learning (MARL) to solve the game problems that arise during EV scheduling. In complex, unstable environments with high-speed dynamic changes of vehicles, traditional methods learn poorly. In this paper, a multi-agent charging scheduling control framework based on the cooperative vehicle infrastructure system (CVIS) is proposed to solve the game problems between electric vehicle charging stations (EVCSs) and EVs. To achieve effective control of EVCSs in large-scale road networks, a new multi-agent A2C algorithm based on a non-cooperative game (NCG-MA2C) is proposed. In NCG-MA2C, the state representation is built on a K-nearest-neighbor multi-head attention mechanism, actions are defined by the proposed adaptive service-price adjustment strategy, and a spatio-temporally discounted joint reward stabilizes learning convergence. Experimental results show that, compared with the fixed strategy, the greedy strategy, independent Q-learning (IQL), multi-agent independent Q-learning (MAIQL), and the multi-agent deep deterministic policy gradient (MADDPG) algorithm, the proposed algorithm performs well in increasing the efficiency of EVCSs while reducing EV charging cost. The NCG-MA2C algorithm also shows strong extensibility and validity in complex urban environments.
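To make the state-representation idea concrete, the sketch below shows one plausible form of a K-nearest-neighbor multi-head attention encoder: each EVCS agent attends only over the states of its K spatially nearest neighbors. This is an illustrative assumption, not the paper's implementation; the random projection matrices stand in for learned parameters, and all function and variable names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def knn_multihead_state(ego_state, ego_pos, neighbor_states, neighbor_pos,
                        k=4, num_heads=2, seed=0):
    """Encode one agent's observation by attending over its K nearest neighbors.

    ego_state: (d,) feature vector of this EVCS agent
    ego_pos: (2,) its coordinates; neighbor_pos: (n, 2) other agents' coordinates
    neighbor_states: (n, d) other agents' feature vectors
    NOTE: random weights below are stand-ins for trained attention parameters.
    """
    rng = np.random.default_rng(seed)
    d = ego_state.shape[0]
    d_head = d // num_heads  # assume d divisible by num_heads

    # Select the K spatially nearest neighbor agents (Euclidean distance).
    dists = np.linalg.norm(neighbor_pos - ego_pos, axis=1)
    neigh = neighbor_states[np.argsort(dists)[:k]]          # (k, d)

    heads = []
    for _ in range(num_heads):
        Wq = rng.standard_normal((d, d_head)) / np.sqrt(d)
        Wk = rng.standard_normal((d, d_head)) / np.sqrt(d)
        Wv = rng.standard_normal((d, d_head)) / np.sqrt(d)
        q = ego_state @ Wq                                  # (d_head,)
        key = neigh @ Wk                                    # (k, d_head)
        val = neigh @ Wv                                    # (k, d_head)
        attn = softmax(key @ q / np.sqrt(d_head))           # (k,) weights
        heads.append(attn @ val)                            # (d_head,)
    # Concatenate heads into the fixed-size state embedding.
    return np.concatenate(heads)                            # (num_heads * d_head,)
```

Restricting attention to the K nearest neighbors keeps the state dimension fixed regardless of the total number of agents, which is one way such a design could scale to large road networks.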