This is a continued study of Li et al. (2011) on the regularization of ill-conditioned problems, in which the minimal singular value σmin of the discrete matrices is close to zero. To better remove the high-frequency effects caused by the singular vectors associated with σmin, we combine the Tikhonov regularization (TR) with the truncated singular value decomposition (TSVD), denoted by T-TR for short. New computational formulas for the traditional condition number (Cond) and the effective condition number (Cond_eff) are derived, and a brief error analysis is given. The regularization parameter λ is involved in the TR, and better (or optimal) choices of λ are essential in both theory and computation. For ill-conditioned problems, stability is more imperative than accuracy, so the most important criterion is to reduce the Cond. Consider the linear algebraic equations Ax = b. The optimal regularization parameter for the TR is derived as λ = √(σmax σmin), where σmax and σmin (> 0) are the maximal and the minimal singular values of the matrix A, respectively. The L-curve techniques emphasize the ill-conditioning of ‖xλ‖ (see Hansen (1998)); note that the norm ‖xλ‖ accounts for only part of the Cond in stability. In fact, by regularization, the Cond may be greatly reduced while the errors do not increase much. The second important criterion concerns both stability and solution accuracy. The sensitivity index proposed in Zhang et al. (2021) indicates the severity of ill-conditioning via accuracy, for comparing different numerical methods and techniques. The sensitivity index is effective not only for selecting the source nodes in the method of fundamental solutions (MFS), but also for selecting the regularization parameter in the TR. The regularization choices of the T-TR are similar to those of the TR, because the differences in the Cond and the errors between the TR and the T-TR are insignificant. Numerical experiments with the MFS for Laplace's equation are reported to support the new regularization techniques proposed.
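A minimal numerical sketch of the T-TR idea described above may help fix the notation: truncate the SVD at some rank k and apply the Tikhonov filter σi/(σi² + λ²) to the retained modes, with λ = √(σmax σmin) as the default parameter. The function name `t_tr_solve`, the truncation rank, and the test matrix are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def t_tr_solve(A, b, k=None, lam=None):
    """Illustrative T-TR sketch (not the paper's code): truncate the SVD
    at rank k, then apply the Tikhonov filter s/(s^2 + lam^2) to the
    retained singular modes."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if k is None:
        k = len(s)                       # keep all modes by default
    if lam is None:
        lam = np.sqrt(s[0] * s[k - 1])   # lam = sqrt(sigma_max * sigma_min)
    f = s[:k] / (s[:k] ** 2 + lam ** 2)  # Tikhonov filter factors
    x = Vt[:k].T @ (f * (U[:, :k].T @ b))
    return x, lam

# Synthetic ill-conditioned example: prescribed singular values 1 .. 1e-8
rng = np.random.default_rng(0)
U, _, Vt = np.linalg.svd(rng.standard_normal((20, 20)))
s = np.logspace(0, -8, 20)
A = U @ np.diag(s) @ Vt
b = A @ np.ones(20)
x, lam = t_tr_solve(A, b, k=15)
```

Since the filter factors lie in [0, 1], the regularized residual never exceeds ‖b‖, while the tiny singular values below the truncation level no longer amplify the solution norm.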
For data fitting, image processing, pattern recognition, and machine learning, however, ‖xλ‖ is more important than the Cond for ill-conditioned problems. The L-curve techniques are used in wide applications and are discussed in Hansen (1998), but the parameter choices are rather complicated if the crossover region is not small. In this paper, we apply the sensitivity index to L-curves: the optimal parameter λL-curve can be found as the minimizer of the sensitivity index. The λL-curve may also be found by the lowest line that passes through the origin. The new algorithms are simple and effective.
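To make the L-curve concrete, the following sketch traces the residual norm ‖Axλ − b‖ against the solution norm ‖xλ‖ over a grid of λ values. The sensitivity index of Zhang et al. (2021) is not reproduced here; as a hypothetical stand-in, the corner λ is picked by minimizing the product ‖xλ‖·‖Axλ − b‖ (a Regińska-type criterion), which is an assumption for illustration only.

```python
import numpy as np

def l_curve_points(A, b, lambdas):
    """Residual norm ||A x_lam - b|| and solution norm ||x_lam|| along the
    Tikhonov L-curve, evaluated via the SVD filter factors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res, sol = [], []
    for lam in lambdas:
        f = s / (s ** 2 + lam ** 2)   # Tikhonov filter
        x = Vt.T @ (f * beta)
        res.append(np.linalg.norm(A @ x - b))
        sol.append(np.linalg.norm(x))
    return np.array(res), np.array(sol)

# Ill-conditioned example with slightly noisy data
rng = np.random.default_rng(1)
n = 30
U, _, Vt = np.linalg.svd(rng.standard_normal((n, n)))
s = np.logspace(0, -10, n)
A = U @ np.diag(s) @ Vt
b = A @ np.ones(n) + 1e-6 * rng.standard_normal(n)

lambdas = np.logspace(-8, 0, 50)
res, sol = l_curve_points(A, b, lambdas)
# Hypothetical stand-in corner criterion: minimize ||x_lam|| * ||r_lam||
corner = lambdas[np.argmin(res * sol)]
```

Along the grid the residual norm is non-decreasing and the solution norm non-increasing in λ, which is what gives the curve its characteristic L shape in a log-log plot.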