Owing to the different imaging principles of visible and infrared cameras, there are modality discrepancies between images of the same person. For visible-infrared person re-identification (VI-ReID), existing works focus on extracting and aligning cross-modality global descriptors in a shared feature space, while ignoring local variations and graph-structural correlations in cross-modality image pairs. To bridge the modality gap using graph structures built over key body parts, a Meta-Graph Isomerization Aggregation Module (MIAM) is proposed, which consists of a Meta-Graph Node Isomerization (MNI) module and a Dual Aggregation (DA) module. To fully describe the discriminative local features within a graph, MNI establishes meta-secondary cyclic isomorphism relations among intra-graph local features through a multi-branch embedding generation mechanism. As a result, each local feature not only carries the limited information of its fixed region but also benefits from neighboring regions. Meanwhile, the secondary node generation process considers both similar and dissimilar nodes of the pedestrian graph structure to reduce the interference of identity differences. In addition, the DA module combines spatial self-attention and channel self-attention to model the interdependence between modality-heterogeneous graph-structure pairs and achieve inter-modality feature aggregation. To match heterogeneous graph structures, a Node Center Joint Mining (NCJM) loss is proposed to constrain the distance between the node centers of heterogeneous graphs. Experiments on the public SYSU-MM01, RegDB, and LLCM datasets demonstrate that the proposed method performs excellently in VI-ReID.
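The abstract does not give a formula for the NCJM loss. A minimal sketch, assuming the node center is the mean of a graph's part-node features and the loss penalizes the Euclidean distance between the visible and infrared centers of the same identity (the function and variable names here are hypothetical, not from the paper):

```python
import numpy as np

def ncjm_loss(vis_nodes, ir_nodes):
    """Hypothetical sketch of the Node Center Joint Mining loss.

    vis_nodes, ir_nodes: arrays of shape (num_nodes, feat_dim) holding
    the local (part) node features of one identity's visible and
    infrared graph structures.

    Each graph's node center is taken as the mean of its node features;
    the loss pulls the two modality centers together.
    """
    vis_center = vis_nodes.mean(axis=0)   # center of the visible graph
    ir_center = ir_nodes.mean(axis=0)     # center of the infrared graph
    return float(np.linalg.norm(vis_center - ir_center))

# Toy example: 4 part nodes with 8-dim features per modality.
rng = np.random.default_rng(0)
vis = rng.normal(size=(4, 8))
ir = vis + 0.1 * rng.normal(size=(4, 8))  # same identity, small modality shift
loss = ncjm_loss(vis, ir)
```

In training, such a term would be minimized jointly with the identification loss so that heterogeneous graphs of the same identity collapse toward a shared center.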