Regularization aims to shrink model parameters, reducing complexity and the risk of overfitting. Traditional methods such as LASSO and Ridge regression rely on a single regularization hyperparameter, which limits flexibility in the bias-variance trade-off. This paper addresses system identification in a multi-ridge regression framework, where an l2-penalty on the model coefficients is introduced and a different regularization hyperparameter is assigned to each model parameter. To compute the optimal hyperparameters, a cross-validation-based criterion is optimized through gradient descent. Autoregressive and Output Error models are considered. The former requires formulating a regularized least-squares problem. The identification of the latter class is more challenging and is addressed by adopting regularized instrumental variable methods to ensure consistent parameter estimation.
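To make the multi-ridge idea concrete, the following minimal sketch (not the paper's implementation) shows the regularized least-squares step with a separate l2-penalty per coefficient, solved in closed form as theta = (X'X + diag(lam))^(-1) X'y. The data, penalty values, and function name `multi_ridge` are illustrative assumptions.

```python
import numpy as np

def multi_ridge(X, y, lam):
    """Regularized least squares with one l2-penalty per coefficient:
    minimizes ||y - X @ theta||^2 + sum_i lam[i] * theta[i]^2.
    Closed form: theta = (X'X + diag(lam))^{-1} X'y."""
    return np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)

# Toy regression: y depends only on the first regressor (illustrative data).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(100)

# A large penalty on the uninformative coefficients shrinks them toward
# zero, while a small penalty leaves the informative one nearly unbiased --
# a per-parameter trade-off a single shared hyperparameter cannot express.
theta = multi_ridge(X, y, lam=np.array([1e-3, 1e3, 1e3]))
```

In the paper, the per-parameter penalties `lam` are themselves tuned by gradient descent on a cross-validation criterion rather than fixed by hand as above.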