By exploiting global similarity, hashing methods based on predefined hash centers have achieved more accurate retrieval than pairwise/triplet-based methods. Nevertheless, fixed hash centers lack any perception of the data distribution and are constrained by the predetermined Hadamard matrix: they consider neither label semantics nor object scale, resulting in sub-optimal retrieval performance and weak generalization. In this paper, we (1) exploit label semantic information to generate self-adaptive hash centers and (2) propose the label-affinity coefficient (lac), which accounts for the scale of each labeled object appearing in an image to compute the true hash centroid of that image. Based on this, we propose Label-affinity Self-adaptive Central Similarity Hashing (LSCSH) for image retrieval. LSCSH consists of a hash code generator module and a hash center adapter module. First, we obtain the label word vectors (i.e., the word-vector representation of each class label) via Word2Vec to generate and update hash centers that adapt to the distribution of both the label word vectors and the generated hash codes. Second, we learn lac to indicate the dominance of the labels corresponding to the objects in each image; by accounting for the unequal scales of these objects, lac yields a more accurate hash centroid for each image. Finally, we design an asynchronous learning mechanism that lets each hash code and its corresponding hash centroid adapt to each other dynamically. We conduct extensive experiments on five image datasets: CIFAR-10, ImageNet, VOC2012, MS-COCO, and NUS-WIDE. The results demonstrate that LSCSH achieves state-of-the-art retrieval performance on both single-label and multi-label image datasets. The code is released at: https://github.com/lzHZWZ/LSCSH_sourcecode.git.
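To make the role of lac concrete, the sketch below shows one plausible way an image-level hash centroid could be formed as a lac-weighted combination of the per-label hash centers, binarized by sign. The function name, the softmax normalization, and the sign binarization are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch

def lac_weighted_centroid(hash_centers: torch.Tensor,
                          lac_weights: torch.Tensor) -> torch.Tensor:
    """Combine per-label hash centers into one per-image hash centroid.

    hash_centers: (num_labels, code_len) tensor of hash centers, one row
                  per class label present in the image.
    lac_weights:  (num_labels,) label-affinity coefficients indicating
                  each label's dominance (e.g., driven by object scale).
    """
    # Normalize the coefficients so the centroid is a convex combination
    # of the centers (an assumption of this sketch).
    lac_weights = torch.softmax(lac_weights, dim=0)
    centroid = (lac_weights.unsqueeze(1) * hash_centers).sum(dim=0)
    # Binarize to {-1, +1}, a common convention in hashing methods.
    return torch.sign(centroid)
```

Under these assumptions, a single-label image simply recovers its label's own hash center, while in a multi-label image the labels of larger objects pull the centroid toward their centers.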