Abstract

Understanding the role of regularization is a central question in statistical inference. Empirically, well-chosen regularization schemes often dramatically improve the quality of the inferred models by avoiding overfitting of the training data. We consider here the particular case of L2 regularization in the maximum a posteriori (MAP) inference of generative pairwise graphical models. Based on analytical calculations for multivariate Gaussian distributions and numerical experiments on Gaussian and Potts models, we study the likelihoods of the training, test, and ‘generated’ data sets (the latter sampled from the inferred models) as functions of the regularization strength. We show in particular that, at its maximum, the test likelihood and the ‘generated’ likelihood, which quantifies the quality of the generated samples, take remarkably close values. The optimal value of the regularization strength is found to be approximately equal to the inverse of the sum of the squared couplings incoming on each site of the underlying interaction network. Our results appear robust against changes in the structure of the ground-truth interactions that generated the data, when small fluctuations of the posterior distribution around the MAP estimator are taken into account, and when L1 regularization is considered instead of L2. Connections with empirical works on protein models learned from homologous sequences are discussed.
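As a concrete illustration of the Gaussian setting mentioned above, the sketch below (not the authors' code; the dimension, coupling statistics, sample sizes, and the grid of regularization strengths are all illustrative assumptions) computes the L2-penalized MAP estimate of a Gaussian precision matrix via the closed-form per-eigenvalue solution of the stationarity condition, then scans the training, test, and ‘generated’ log-likelihoods over the regularization strength.

```python
# Minimal sketch: L2-regularized MAP inference of a Gaussian precision matrix,
# scanning train/test/'generated' likelihoods over the regularization strength.
# Illustrative only -- not the paper's code or parameter choices.
import numpy as np

rng = np.random.default_rng(0)

def ridge_precision(S, lam):
    """Maximize log det(T) - tr(S T) - lam * ||T||_F^2 over symmetric T.
    The stationarity condition T^{-1} = S + 2*lam*T forces T to share
    eigenvectors with S; per eigenvalue d of S, solve 2*lam*t^2 + d*t - 1 = 0
    and keep the positive root."""
    d, U = np.linalg.eigh(S)
    t = (np.sqrt(d**2 + 8.0 * lam) - d) / (4.0 * lam)
    return (U * t) @ U.T

def gauss_loglik(S, T):
    """Average Gaussian log-likelihood (up to an additive constant) of data
    with sample covariance S, under the zero-mean model with precision T."""
    _, logdet = np.linalg.slogdet(T)
    return 0.5 * (logdet - np.trace(S @ T))

# Ground-truth sparse pairwise couplings, made positive definite by
# diagonal dominance (hypothetical network; sparsity and scale are arbitrary).
p = 20
A = rng.normal(0.0, 0.4, (p, p)) * (rng.random((p, p)) < 0.15)
J_true = (A + A.T) / 2.0
np.fill_diagonal(J_true, 0.0)
J_true += np.eye(p) * (np.abs(J_true).sum(axis=1).max() + 0.5)

C_true = np.linalg.inv(J_true)
n_train, n_eval = 100, 5000
X_train = rng.multivariate_normal(np.zeros(p), C_true, size=n_train)
X_test = rng.multivariate_normal(np.zeros(p), C_true, size=n_eval)
S_train = X_train.T @ X_train / n_train
S_test = X_test.T @ X_test / n_eval

for lam in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
    T = ridge_precision(S_train, lam)
    # 'Generated' likelihood: samples drawn from the inferred model,
    # scored under the ground-truth precision matrix.
    X_gen = rng.multivariate_normal(np.zeros(p), np.linalg.inv(T), size=n_eval)
    S_gen = X_gen.T @ X_gen / n_eval
    print(f"lam={lam:8.4f}  train={gauss_loglik(S_train, T):+.3f}  "
          f"test={gauss_loglik(S_test, T):+.3f}  "
          f"gen={gauss_loglik(S_gen, J_true):+.3f}")
```

On a run of this kind one would expect the training likelihood to decrease monotonically with the regularization strength, while the test and ‘generated’ likelihoods peak at intermediate, comparable values, consistent with the behavior described in the abstract.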
