Many variants of classical least squares regression (LSR) have been applied widely in numerous applications. However, most previous linear regression methods consider only the fit between the original features and the corresponding label information, ignoring the correlations among data points. A further problem is that these methods use a strict zero-one binary label matrix as the final regression target; such a rigid target offers very little freedom, so the binary label information cannot be fitted adequately. To address these issues, we present a semi-supervised discriminative sparse least squares regression (DSLSR) algorithm with several notable characteristics. First, we use the estimated labels of the observed samples to design a new objective function in a generalized regression form, further generalizing the previous least squares regression framework. Second, a novel jointly sparse regularization term is designed to make full use of the estimated label information; it forces the extracted features of each class, rather than the learned projection, to be jointly sparse. Third, label estimation, label relaxation, locality, and joint sparsity are seamlessly integrated into the LSR framework. As a result, the margins between observed data sharing the same class label are made as small as possible while the margins between observed data from different classes are simultaneously maximized. Experimental results show that DSLSR outperforms state-of-the-art linear regression-based approaches.
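To make the contrast concrete, the sketch below illustrates (not the DSLSR algorithm itself, whose objective is not specified here) the two baseline ideas the abstract refers to: ridge-regularized LSR fitted to a strict zero-one label matrix, and the same solver fitted to a relaxed target in the spirit of epsilon-dragging, where the correct-class entry is pushed above 1 and wrong-class entries below 0. The dataset, the fixed slack matrix `M`, and the regularization constant `lam` are all hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic 2-class dataset (hypothetical, for illustration only).
X = np.vstack([rng.normal(-2.0, 1.0, size=(20, 5)),
               rng.normal(+2.0, 1.0, size=(20, 5))])   # n x d features
y = np.array([0] * 20 + [1] * 20)                      # class labels

# Strict zero-one binary label matrix Y -- the rigid regression target.
Y = np.eye(2)[y]                                       # n x c

# Classical ridge-regularized LSR: W = (X^T X + lam I)^{-1} X^T Y.
lam = 1e-2
d = X.shape[1]
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Relaxed target in the epsilon-dragging style: B holds dragging
# directions (+1 for the true class, -1 otherwise) and M is a
# nonnegative slack, fixed here instead of being learned.
B = np.where(Y > 0, 1.0, -1.0)
M = 0.5 * np.ones_like(Y)
W_relaxed = np.linalg.solve(X.T @ X + lam * np.eye(d),
                            X.T @ (Y + B * M))

pred = (X @ W).argmax(axis=1)
pred_relaxed = (X @ W_relaxed).argmax(axis=1)
print("binary-target accuracy: ", (pred == y).mean())
print("relaxed-target accuracy:", (pred_relaxed == y).mean())
```

In a full method the slack `M` would be optimized jointly with `W` (enlarging the margin between classes), and a joint-sparsity penalty such as an L2,1 norm would be added to the objective; this fragment only shows the change of regression target.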