Due to its capability in handling complex urban intersection environments, deep reinforcement learning (DRL) has been widely applied to Adaptive Traffic Signal Control (ATSC). However, most existing algorithms are designed for specific road networks or traffic conditions, making them difficult to transfer to new environments. Moreover, current graph-based algorithms do not fully capture the geometric and spatial features of intersections, resulting in incomplete state embeddings of the agent's environment. Additionally, the actions adopted by these algorithms are inherently tied to fixed-cycle phases, limiting the flexibility of traffic signal control. To address these issues, this paper proposes a Multi-layer Graph Mask Q-Learning (MGMQ) algorithm for multi-intersection ATSC to optimize traffic flow and reduce delay. Unlike previous graph-based algorithms, this paper introduces a method for computing multi-layer graphs, dividing the traffic environment into an upper-level traffic-network-layer graph and lower-level intersection-layer graphs, and employs the graph attention algorithm together with an improved GraphSAGE algorithm to compute them. This method not only enables the generation of intersection state embeddings that incorporate geometric and spatial features, but also allows the algorithm to adapt to different traffic conditions and road networks. Additionally, we introduce an action masking mechanism, allowing the algorithm to adapt to different action spaces. As a result, the algorithm uses arbitrary signal phases as actions to achieve flexible traffic flow control, and can be directly applied to intersections of arbitrary geometry. Test results demonstrate that a model trained solely on synthetic road networks can be transferred directly to other synthetic network configurations or to real-world urban road networks, outperforming current state-of-the-art algorithms.
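To make the action masking mechanism concrete, the sketch below shows one common way such masking is realized in Q-learning: invalid phases for a given intersection are assigned a Q-value of negative infinity before greedy selection, so only phases that actually exist at that intersection can be chosen. This is a minimal illustration under assumed names (`q_values`, `phase_mask`, `masked_greedy_action`), not the paper's exact MGMQ implementation.

```python
import torch


def masked_greedy_action(q_values: torch.Tensor, phase_mask: torch.Tensor) -> torch.Tensor:
    """Greedy action selection restricted to valid signal phases.

    q_values:   (batch, max_phases) Q-value estimates over a padded phase space.
    phase_mask: (batch, max_phases) boolean, True where the phase exists
                at the given intersection.
    """
    # Invalid phases receive -inf so argmax can never select them.
    masked_q = q_values.masked_fill(~phase_mask, float("-inf"))
    return masked_q.argmax(dim=-1)


# Example: an intersection offering only 3 of a maximum of 8 phases.
q = torch.randn(1, 8)
mask = torch.tensor([[True, True, True, False, False, False, False, False]])
action = masked_greedy_action(q, mask)  # returned index is guaranteed to be < 3
```

Because the mask is applied at selection time rather than baked into the network architecture, the same Q-network can, in principle, serve intersections with different numbers of available phases, which is consistent with the transferability claim above.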