This paper presents a drone system that combines an improved network topology with multi-agent reinforcement learning (MARL) to enhance mission performance in unmanned aerial vehicle (UAV) swarms across various scenarios. We propose a UAV swarm system in which drones perform tasks efficiently with limited information sharing and optimal action selection, enabled by our Efficient Self UAV Swarm Network (ESUSN) and reinforcement learning (RL). Compared with traditional mesh networks, the system reduces communication delay by 53% and energy consumption by 63% with five drones, and achieves a 64% shorter delay and 78% lower energy consumption with ten drones. Relative to non-RL-based systems, mission performance and collision avoidance improve significantly, with the proposed system achieving zero collisions in scenarios involving up to ten drones. These results demonstrate that training drone swarms through MARL with optimized information sharing substantially increases mission efficiency and reliability, enabling the simultaneous operation of multiple drones.