Graph Neural Networks (GNNs) play a key role in efficiently learning node representations of graph-structured data through message passing, but their predictions are often correlated with sensitive attributes and can therefore discriminate against certain groups. Given the increasingly widespread application of GNNs, solutions are urgently needed to prevent the algorithmic discrimination associated with GNNs, to protect the rights of vulnerable groups, and to build trustworthy artificial intelligence. To learn fair node representations of graphs, we propose a novel framework, the Fair Disentangled Graph Neural Network (FDGNN). Within the proposed FDGNN framework, we enhance data diversity through data augmentation, generating instances that have identical sensitive attribute values but different adjacency matrices. Additionally, we design a counterfactual augmentation strategy that constructs instances with varying sensitive values while preserving the same adjacency matrices, thereby balancing the distribution of sensitive values across groups. Subsequently, we employ a disentangled contrastive learning strategy to acquire disentangled representations of the non-sensitive attributes, so that sensitive information does not affect node-level predictions. Finally, the learned fair representations of the non-sensitive attributes are used to build a fair predictive model. Extensive experiments on three real-world datasets demonstrate that FDGNN outperforms the baseline methods in prediction fairness. The results further demonstrate the potential of disentanglement for learning fair representations.
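
To make the two augmentation strategies concrete, the following is a minimal sketch in PyTorch. It assumes a dense adjacency matrix for an undirected graph without self-loops and a binary sensitive attribute stored in node-feature column `sens_idx`; these representational details, and the specific edge-dropping perturbation, are illustrative assumptions rather than the paper's exact procedure.

```python
import torch


def structure_augment(adj: torch.Tensor, x: torch.Tensor, drop_prob: float = 0.1):
    """First strategy: perturb the adjacency matrix (here, random edge dropping)
    while keeping node features -- including the sensitive attribute -- unchanged.
    Assumes `adj` is a dense, symmetric (n, n) matrix with no self-loops."""
    keep = (torch.rand_like(adj) > drop_prob).float()
    keep = torch.triu(keep, diagonal=1)
    keep = keep + keep.T                    # symmetric mask: keep the graph undirected
    adj_aug = adj * keep                    # drop a random subset of edges
    return adj_aug, x                       # same sensitive values, different structure


def counterfactual_augment(adj: torch.Tensor, x: torch.Tensor, sens_idx: int):
    """Second strategy: flip the binary sensitive attribute to construct a
    counterfactual instance, while preserving the adjacency matrix."""
    x_cf = x.clone()
    x_cf[:, sens_idx] = 1.0 - x_cf[:, sens_idx]   # flip 0 <-> 1 (assumes binary attribute)
    return adj, x_cf                        # different sensitive values, same structure
```

In a contrastive setup, outputs of `structure_augment` would serve as positive views sharing sensitive information, while `counterfactual_augment` balances the distribution of sensitive values across groups by pairing each node with its sensitive-attribute counterfactual.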