Unmanned Aerial Vehicles (UAVs) are an emerging class of electric transportation carriers with substantial promise for the logistics sector. This paper proposes Centralized-S Proximal Policy Optimization (C-SPPO), a reinforcement learning framework built on a centralized decision process that accounts for policy entropy (S). The framework aims to plan scheduling schemes that minimize both the timeout of order requests and the flight impact among UAVs that may lead to conflicts. Within this framework, UAV agents generate matching intents from their own observations, and a centralized decision maker guides these intents into final conflict-free matching results. In addition, a pre-activation operation is introduced to further enhance cooperation among UAV agents. Simulation experiments based on real-world data from New York City are conducted. The results indicate that the proposed C-SPPO outperforms the baseline algorithms in Average Delay Time (ADT), Maximum Delay Time (MDT), Order Delay Rate (ODR), Average Flight Distance (AFD), and Flight Impact Ratio (FIR). Furthermore, the framework scales to scenarios of different sizes without requiring additional training.
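The abstract does not state the training objective explicitly; as a point of reference only, the "S" in C-SPPO suggests the standard PPO clipped surrogate augmented with an entropy bonus. In its conventional form (with assumed symbols: advantage estimate \(\hat{A}_t\), clipping range \(\epsilon\), entropy coefficient \(\beta\)), that objective reads

\[
L(\theta) = \mathbb{E}_t\Big[\min\big(r_t(\theta)\,\hat{A}_t,\; \mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t\big) + \beta\, \mathcal{H}\big(\pi_\theta(\cdot \mid s_t)\big)\Big],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},
\]

maximized over \(\theta\), where \(\mathcal{H}\) denotes the policy entropy. This is a generic sketch of entropy-regularized PPO, not the paper's exact formulation, which additionally involves the centralized decision maker described above.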