ABSTRACT Remote sensing image retrieval (RSIR) aims to find, within an image database, the images that contain the same instance as a query image, and is an essential task in remote sensing applications. Traditional deep hashing algorithms typically encode the image database into hash codes of a specified bit length. On the one hand, the hash codes generated by traditional methods are of low quality and do not guarantee that images of the same class cluster well in Hamming space. On the other hand, the feature extraction capability of the backbone networks needs to be improved. This paper proposes a deep hashing model, adaptive hash code balancing (AHCB), to address these two problems. The model introduces a balanced binarization method that maximizes the entropy of the hash bits so that the generated codes cluster better. Graph convolutional networks (GCNs) are introduced to automatically identify relevant data points in the graph; their gradients are back-propagated to the feature extraction layers, strengthening feature extraction and enabling the model to learn the intrinsic data structure among remote sensing images. Experimental results on three public datasets show that the proposed method outperforms current state-of-the-art deep hashing algorithms for remote sensing image retrieval by a large margin.
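To make the bit-balancing idea concrete, the sketch below shows one common way such an objective can be expressed: a regularizer that pushes each hash bit toward being +1 for roughly half of a batch, which maximizes per-bit entropy. This is an illustrative assumption in PyTorch, not the exact AHCB formulation; the function and tensor names are hypothetical.

```python
import torch

def bit_balance_loss(hash_logits: torch.Tensor) -> torch.Tensor:
    """Hypothetical bit-balance regularizer (assumption, not the paper's code).

    hash_logits: (batch_size, num_bits) relaxed codes in [-1, 1], e.g. tanh outputs.
    A per-bit batch mean of 0 means the bit is +1 for half the samples,
    i.e. maximum entropy for that bit.
    """
    per_bit_mean = hash_logits.mean(dim=0)   # average activation of each bit over the batch
    return (per_bit_mean ** 2).sum()          # penalize deviation from a balanced bit

# Usage sketch: a random batch of 64 relaxed 48-bit codes.
codes = torch.tanh(torch.randn(64, 48))
loss = bit_balance_loss(codes)
```

In practice a term of this kind would be added to the retrieval (similarity-preserving) loss with a weighting coefficient, so that code balance is encouraged without overriding the ranking objective.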