Graph neural networks require large numbers of labeled samples for training, and manual labeling generally incurs unaffordable cost. Active learning has emerged as a promising approach that selects a small set of informative samples for labeling to improve model performance. However, few active learning techniques for graph data account for the cluster structure and redundancy of samples. To address these issues, we propose an approach that uses uncertainty information as the observation for a reinforcement learning agent, which adaptively learns a node selection policy. We construct states from node information measured via mutual information, taking into account both the graph structure and the node attributes. The proposed method effectively measures node informativeness by exploiting the receptive field of the graph convolutional network, and captures the clustering structure of the data while ensuring low redundancy and high diversity among the labeled samples. Experiments on real-world datasets demonstrate the superiority of the proposed approach over several state-of-the-art methods.
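As an illustrative sketch only (not the paper's method), the uncertainty signal mentioned above can be made concrete as the entropy of each unlabeled node's predicted class distribution, with the most uncertain node selected as the next labeling candidate; the helper names and the toy probability matrix here are hypothetical.

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy of each node's predicted class distribution."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_most_uncertain(probs, unlabeled):
    """Pick the unlabeled node whose prediction is most uncertain."""
    scores = prediction_entropy(probs)
    return max(unlabeled, key=lambda i: scores[i])

# Toy class-probability matrix for 4 nodes over 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction
    [0.34, 0.33, 0.33],  # near-uniform: highly uncertain
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
])
picked = select_most_uncertain(probs, unlabeled=[1, 2, 3])
print(picked)  # node 1 has the near-uniform distribution
```

In the full approach, such an uncertainty score would be only one component of the agent's observation, alongside structural information about the graph.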