In air combat, autonomous decision-making for Unmanned Aerial Vehicles (UAVs) has emerged as a critical capability. However, prevailing autonomous decision-making algorithms in this domain rely predominantly on rule-based methods, which makes it difficult to design and implement optimal strategies in complex multi-UAV combat environments. This paper proposes a novel approach to multi-UAV air combat decision-making based on hierarchical reinforcement learning. First, a hierarchical decision-making network is designed around tactical action types to reduce the complexity of the maneuver decision space. Second, high-quality combat experience gained during training is decomposed to increase the number of valuable experiences and ease the difficulty of strategy learning. Finally, the algorithm is validated on the advanced UAV simulation platform JSBSim. Comparisons with several baseline algorithms demonstrate the superior performance of the proposed method in both evenly matched and disadvantaged air combat scenarios.
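To make the hierarchical structure concrete, the following is a minimal illustrative sketch in Python (PyTorch) of a two-level policy: a high-level head selects a tactical action type, and a matching low-level head outputs continuous maneuver commands. The tactic set, network sizes, and module names here are hypothetical assumptions for illustration only; they are not the paper's actual architecture.

```python
# Hypothetical sketch of a hierarchical decision-making policy.
# The tactical action types, layer widths, and action encoding are
# assumptions; the paper's real design may differ.
import torch
import torch.nn as nn

TACTIC_TYPES = ["attack", "defend", "evade"]  # hypothetical action types

class HighLevelPolicy(nn.Module):
    """Maps the combat state to a distribution over tactical action types."""
    def __init__(self, state_dim: int, num_types: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, num_types),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Sample a discrete tactic index from the categorical distribution.
        return torch.distributions.Categorical(logits=self.net(state)).sample()

class LowLevelPolicy(nn.Module):
    """Maps the state to continuous maneuver commands for one tactic type."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # normalized control commands
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class HierarchicalPolicy(nn.Module):
    """High-level head picks a tactic; the matching low-level head acts."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.high = HighLevelPolicy(state_dim, len(TACTIC_TYPES))
        self.lows = nn.ModuleList(
            LowLevelPolicy(state_dim, action_dim) for _ in TACTIC_TYPES
        )

    def act(self, state: torch.Tensor) -> tuple[int, torch.Tensor]:
        tactic = int(self.high(state))          # which tactical action type
        return tactic, self.lows[tactic](state)  # its maneuver command
```

Factoring the maneuver space by tactic type in this way shrinks each low-level policy's decision space, which is the motivation the abstract gives for the hierarchical design.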