With the rapid development of Artificial Intelligence (AI), AI-enabled Uncrewed Aerial Vehicles (UAVs) have attracted extensive attention because they offer an accessible and cost-effective means of executing tasks in unknown or complex environments. However, developing secure and effective AI-based algorithms that enable agents to learn, adapt, and make precise decisions in dynamic situations remains an open research challenge. This paper proposes a hybrid intelligent control framework that integrates an enhanced Soft Actor–Critic (SAC) method with a fuzzy inference system, incorporating pre-defined expert experience to streamline the learning process. Several practical algorithms and supporting approaches within this control system are also developed. Through the combination of these components, the proposed method achieves effective real-time path planning in unpredictable environments under a model-free setting. Crucially, it addresses two significant challenges in reinforcement learning (RL): dynamic-environment problems and multi-target problems. Diverse scenarios incorporating actual UAV dynamics were designed and simulated to validate the performance in tracking multiple mobile intruder aircraft. A comprehensive comparison against methods relying solely on RL, an analysis of other influencing factors, and a feasibility assessment of the controller for real-world flight tests are conducted, highlighting the advantages of the proposed hybrid architecture. Overall, this research advances AI-driven approaches for safe autonomous UAV navigation under demanding airspace conditions and provides a viable learning-based control solution for other types of robots.
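To illustrate the kind of hybrid decision rule the abstract describes, the minimal Python sketch below blends a fuzzy expert command with a learned SAC action for a single heading-control channel. The triangular membership functions, three-rule base, and linear expert-weight decay schedule are placeholder assumptions for illustration only, not the design used in the paper.

```python
import numpy as np


def triangular(x, a, b, c):
    """Triangular membership function supported on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def fuzzy_heading_command(bearing_error_rad):
    """Toy fuzzy inference: map bearing error to a normalized yaw command
    via three rules (turn-left / hold / turn-right) and weighted-average
    defuzzification. Breakpoints are illustrative assumptions."""
    mu_left = triangular(bearing_error_rad, -np.pi, -np.pi / 2, 0.0)
    mu_hold = triangular(bearing_error_rad, -np.pi / 4, 0.0, np.pi / 4)
    mu_right = triangular(bearing_error_rad, 0.0, np.pi / 2, np.pi)
    weights = np.array([mu_left, mu_hold, mu_right])
    consequents = np.array([-1.0, 0.0, 1.0])  # rule outputs in [-1, 1]
    if weights.sum() == 0.0:
        return 0.0
    return float(weights @ consequents / weights.sum())


def blended_action(sac_action, bearing_error_rad, training_progress):
    """Blend the learned SAC action with the fuzzy expert command.
    Early in training the expert dominates; its weight decays linearly as
    training_progress goes from 0 to 1 (schedule is an assumption)."""
    expert_action = fuzzy_heading_command(bearing_error_rad)
    w_expert = max(0.0, 1.0 - training_progress)
    return w_expert * expert_action + (1.0 - w_expert) * float(sac_action)


if __name__ == "__main__":
    # Example: the SAC policy proposes a mild right turn while the fuzzy
    # expert sees a large leftward bearing error early in training.
    print(blended_action(sac_action=0.3,
                         bearing_error_rad=-1.2,
                         training_progress=0.25))
```

In such a scheme, the fuzzy expert supplies safe, interpretable behaviour while the policy is still untrained, and the learned SAC action gradually takes over as training progresses; how the expert experience actually enters the learning loop in the proposed framework is detailed in the body of the paper.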