In cross-modal retrieval, most existing hashing-based methods consider only the relationships between feature representations to reduce the heterogeneity gap across modalities, while neglecting the correlation between feature representations and their corresponding labels. This leads to the loss of significant semantic information and degrades the class discriminability of the model. To tackle these issues, this paper presents a novel cross-modal retrieval method called coding self-representative and label-relaxed hashing (CSLRH). Specifically, we propose a self-representation learning term to enhance class-specific feature representations and reduce noise interference. Additionally, we introduce a label-relaxed regression to establish semantic relations between the hash codes and the label information, aiming to enhance semantic discriminability. Moreover, we incorporate a non-linear regression to capture the correlations of non-linear features in the hash codes. Experimental results on three widely used datasets verify the effectiveness of the proposed method, which generates more discriminative hash codes and improves the precision of cross-modal retrieval.
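The label-relaxed regression is only named above, not specified. As an illustration, label relaxation is commonly formulated via ε-dragging, which the paper's term may resemble; the symbols here (B for hash codes, P for a projection matrix, Y for the 0/1 label matrix, E for a non-negative relaxation matrix) are illustrative assumptions, not taken from the abstract:

```latex
% Illustrative label-relaxation (epsilon-dragging) objective; symbols are assumed, not from the abstract.
\min_{P,\; E \ge 0} \; \bigl\lVert P^{\top} B - \bigl( Y + E \odot M \bigr) \bigr\rVert_F^2 ,
\qquad M = 2Y - \mathbf{1},
```

where ⊙ denotes the element-wise product and **1** is the all-ones matrix. Rather than regressing hash codes onto rigid 0/1 targets, the relaxation term E ⊙ M drags targets outward per class, enlarging the margins between classes and thereby improving semantic discriminability.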