Graph Neural Networks (GNNs) can effectively leverage information from graph-structured data and be applied to various downstream tasks, such as recommendation systems and anomaly detection. Previous studies have shown that node-level GNNs are susceptible to membership inference attacks, potentially leaking private information about the nodes. However, existing work on defending against such attacks focuses primarily on image and text data and may not be directly applicable to graphs. Moreover, classic defense methods, such as differential privacy, regularization techniques, and adversarial training, may reduce model utility or require retraining the model while protecting privacy. To address these problems, we propose a novel defense strategy against membership inference attacks in graph neural networks. The strategy augments the graph with additional nodes and modifies its topology to create a new privacy-preserving graph, thereby protecting the privacy of the original nodes. We conducted extensive experiments on three representative GNN models and compared our method with state-of-the-art baselines. The experimental results demonstrate that our method significantly reduces the success rate of membership inference attacks while maintaining the basic performance of the target model.
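The abstract does not specify how the additional nodes are generated or how the topology is modified. The sketch below is one plausible, illustrative instantiation of the idea using PyTorch Geometric, not the paper's actual algorithm: the function name `augment_graph` and the choices of Gaussian feature sampling, random attachment of new nodes, and random edge dropping are all assumptions made for illustration.

```python
import torch
from torch_geometric.data import Data


def augment_graph(data: Data, num_new_nodes: int = 100,
                  edges_per_new_node: int = 3,
                  edge_drop_ratio: float = 0.1,
                  seed: int = 0) -> Data:
    """Build a privacy-preserving graph by adding synthetic nodes and
    perturbing the topology. All specifics here (Gaussian feature
    sampling, random attachment, random edge dropping) are illustrative
    assumptions, not the method described in the paper."""
    g = torch.Generator().manual_seed(seed)
    n, d = data.x.size()

    # Sample synthetic node features from a Gaussian fitted to the
    # original feature distribution (one possible design choice).
    mu, sigma = data.x.mean(dim=0), data.x.std(dim=0)
    new_x = mu + sigma * torch.randn(num_new_nodes, d, generator=g)
    x = torch.cat([data.x, new_x], dim=0)

    # Connect each synthetic node to a few randomly chosen original
    # nodes; add both directions to keep the graph undirected.
    src = torch.arange(n, n + num_new_nodes).repeat_interleave(edges_per_new_node)
    dst = torch.randint(0, n, (num_new_nodes * edges_per_new_node,), generator=g)
    new_edges = torch.cat([torch.stack([src, dst]),
                           torch.stack([dst, src])], dim=1)

    # Modify the original topology by randomly dropping a fraction of edges.
    e = data.edge_index.size(1)
    keep = torch.rand(e, generator=g) >= edge_drop_ratio
    edge_index = torch.cat([data.edge_index[:, keep], new_edges], dim=1)

    return Data(x=x, edge_index=edge_index)
```

Under this reading, the target GNN would be trained on the returned graph rather than the original one, so that the model's behavior on original member nodes is diluted by the synthetic neighborhood structure.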