Functional connectivity (FC) networks derived from resting-state functional magnetic resonance imaging (rs-fMRI) are reliable and sensitive tools for brain disorder diagnosis. However, most existing methods rely on a single brain template, which may be insufficient to reveal complex brain connectivity. Furthermore, these methods usually neglect both the complementary information between static and dynamic brain networks and the functional divergence among different brain regions, leading to suboptimal diagnostic performance. To address these limitations, we propose a novel multi-graph cross-attention based region-aware feature fusion network (MGCA-RAFFNet) that uses multiple templates for brain disorder diagnosis. Specifically, we first employ multiple templates to parcellate the brain space into different regions of interest (ROIs). A multi-graph cross-attention network (MGCAN), comprising static and dynamic graph convolutions, is then developed to extract the deep features contained in the multi-template data; it effectively analyzes the complex interaction patterns of the brain networks for each template and further adopts a dual-view cross-attention (DVCA) module to acquire complementary information between the static and dynamic views. Finally, to efficiently fuse the multiple static-dynamic features, we design a region-aware feature fusion network (RAFFNet), which improves feature discrimination by modeling the underlying relations among static-dynamic features in different brain regions. The proposed method is evaluated on the public ADNI-2 and ABIDE-I datasets for diagnosing mild cognitive impairment (MCI) and autism spectrum disorder (ASD), and extensive experiments demonstrate that it outperforms state-of-the-art methods. Our source code is available at https://github.com/mylbuaa/MGCA-RAFFNet.
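
As an illustration only, and not the authors' implementation (see the linked repository for that), the dual-view idea of letting static and dynamic ROI features attend to each other can be sketched with scaled dot-product attention; all shapes, names, and the additive fusion here are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context):
    """One direction of cross-attention: rows of `queries` (one view)
    attend over rows of `context` (the other view). Hypothetical
    simplification without learned projection matrices."""
    d = queries.shape[-1]
    scores = queries @ context.T / np.sqrt(d)      # (N_q, N_c) affinities
    return softmax(scores, axis=-1) @ context      # context aggregated per query

def dual_view_cross_attention(static_feats, dynamic_feats):
    # Each view queries the other; residual add, then concatenate views.
    s2d = cross_attention(static_feats, dynamic_feats)
    d2s = cross_attention(dynamic_feats, static_feats)
    return np.concatenate([static_feats + s2d, dynamic_feats + d2s], axis=-1)

rng = np.random.default_rng(0)
static = rng.standard_normal((90, 64))   # e.g. 90 ROIs, 64-dim static features
dynamic = rng.standard_normal((90, 64))  # matching dynamic features per ROI
fused = dual_view_cross_attention(static, dynamic)
print(fused.shape)  # (90, 128)
```

In this toy form the two attention directions exchange complementary information between the views before fusion; a real model would add learned query/key/value projections and run one such exchange per template.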