Algebraic multigrid (AMG) is an effective iterative algorithm for solving large-scale linear systems. A central challenge in constructing an AMG algorithm is determining the prolongation operator, which governs the convergence rate of AMG and is problem-dependent. In this paper, we propose a new Learning-based Local Weighted Least Squares (L-LWLS) method to construct the prolongation operator of AMG. Specifically, we construct the prolongation operator by solving the LWLS model with learned spatially-varying weights. We optimize the model with gradient descent, starting from a learned initialization of the solution. The constructed prolongation operator is then further refined by a learned correction function to improve the convergence rate of AMG. We conduct experiments on solving graph Laplacian linear systems, diffusion partial differential equations, and Helmholtz equations. The experiments show that the proposed method constructs better prolongation operators, yielding faster convergence than the compared methods: classical AMG, smoothed aggregation AMG, bootstrap AMG, and a learning-based AMG method. The results also show that the proposed method generalizes well across parameter distributions and problem sizes, i.e., the number of variables in the linear system.
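To make the role of the prolongation operator concrete, the following is a minimal two-grid sketch, not the L-LWLS construction described above. It uses a standard 1D Poisson matrix, weighted-Jacobi smoothing, and a hand-built linear-interpolation prolongation operator; all of these choices are illustrative assumptions, chosen only to show how the prolongation operator transfers the coarse-grid correction back to the fine grid.

```python
import numpy as np

def poisson_1d(n):
    # 1D Poisson matrix: tridiagonal stencil [-1, 2, -1]
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def linear_prolongation(nc, nf):
    # Linear interpolation from nc coarse points to nf = 2*nc + 1 fine points;
    # coarse point j sits at fine index 2*j + 1
    P = np.zeros((nf, nc))
    for j in range(nc):
        i = 2 * j + 1
        P[i, j] = 1.0
        P[i - 1, j] += 0.5
        P[i + 1, j] += 0.5
    return P

def two_grid(A, b, x, P, nu=2, omega=2/3):
    D = np.diag(A)
    for _ in range(nu):                        # pre-smoothing (weighted Jacobi)
        x = x + omega * (b - A @ x) / D
    R = P.T                                    # restriction as transpose of prolongation
    Ac = R @ A @ P                             # Galerkin coarse-grid operator
    r = b - A @ x
    x = x + P @ np.linalg.solve(Ac, R @ r)     # prolongate the coarse correction
    for _ in range(nu):                        # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

nf, nc = 31, 15
A = poisson_1d(nf)
P = linear_prolongation(nc, nf)
b = np.ones(nf)
x = np.zeros(nf)
r0 = np.linalg.norm(b - A @ x)
for _ in range(10):
    x = two_grid(A, b, x, P)
ratio = np.linalg.norm(b - A @ x) / r0         # relative residual after 10 cycles
```

For this model problem, linear interpolation is a good prolongation operator and the residual drops by several orders of magnitude in a few cycles; for harder problems (e.g., Helmholtz or irregular graph Laplacians), a fixed interpolation pattern degrades, which is the motivation for learning the operator instead.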