Abstract

Reinforcement learning is one of the paradigms and methodologies of machine learning developed in the computational intelligence community, and reinforcement learning algorithms have recently faced major challenges in problems with complex dynamics. From the perspective of variable selection, we often encounter situations where too many variables are included in the full model at the initial stage of modeling. For longitudinal data, likelihood inference is computationally challenging because the marginal likelihood involves a high-dimensional, intractable integral, and computationally intensive methods can suffer from very slow convergence or even nonconvergence. Recently, the hierarchical likelihood (h-likelihood) has come to play an important role in inference for models with unobservable or unobserved random variables. This paper focuses on linear models with random effects in the mean structure and proposes a penalized h-likelihood algorithm that incorporates variable selection into the mean modeling. The penalized h-likelihood method avoids the messy integration over the random effects, is computationally efficient, and demonstrates good performance in selecting the relevant variables. Through theoretical analysis and simulations, we confirm that the penalized h-likelihood algorithm produces good fixed-effect estimates and can identify zero regression coefficients in modeling the mean structure.
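The abstract's central object can be made concrete with a standard linear mixed model. The following is a minimal schematic under Gaussian assumptions, which the abstract itself does not fix:

    y = X\beta + Zv + \varepsilon, \quad v \sim N(0, \sigma_v^2 I), \quad \varepsilon \sim N(0, \sigma^2 I),
    h(\beta, v; \theta) = \log f(y \mid v; \beta, \sigma^2) + \log f(v; \sigma_v^2).

Because h is the joint log-density of (y, v), maximizing it in (\beta, v) sidesteps the high-dimensional marginal integral \int f(y \mid v) f(v)\, dv that makes classical likelihood inference for longitudinal data intractable.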

Highlights

  • Reinforcement learning is characterized as trial and error plus learning by Sutton and Barto [1]

  • Hierarchical generalized linear models (HGLMs) are based on the idea of h-likelihood, a generalization of the classical likelihood that accommodates the random components entering the model

  • Inspired by the ideas of reinforcement learning and hierarchical models, this paper proposes a method that adds a penalty term to the h-likelihood (a schematic of the penalized objective follows this list). This method considers both the fixed effects and the random effects in the linear model, and it produces good estimation results with the ability to identify zero regression coefficients in joint models of mean-covariance structures for high-dimensional multilevel data
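The penalty term mentioned in the last highlight can be written in one schematic form; the specific penalty function p_\lambda (e.g., LASSO or SCAD) is an assumption here, not a detail fixed by this summary:

    h_p(\beta, v; \theta) = h(\beta, v; \theta) - n \sum_{j=1}^{p} p_\lambda(\lvert \beta_j \rvert).

Maximizing h_p shrinks small fixed-effect coefficients exactly to zero, which is what enables the identification of zero regression coefficients.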

Summary

Introduction

Reinforcement learning is characterized as trial and error (variation, selection, and search) plus learning (association and memory) by Sutton and Barto [1]. To solve the problem of the random effects and to obtain good estimates, Lee and Nelder [4] proposed hierarchical generalized linear models (HGLMs). HGLMs are based on the idea of h-likelihood, a generalization of the classical likelihood that accommodates the random components entering the model. The h-likelihood is preferable because it avoids the integration required for the marginal likelihood and works with the conditional distribution instead. Inspired by the ideas of reinforcement learning and hierarchical models, this paper proposes a method that adds a penalty term to the h-likelihood; a minimal sketch of one possible estimation scheme is given below. The rest of this paper is organized as follows: Section 2 provides a literature review of current variable selection methods based on partial linear models and the h-likelihood.
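To illustrate how such an algorithm might proceed, the following is a minimal sketch, not the paper's actual algorithm: it alternates a ridge-type update for the random effects (the v-step of maximizing h) with a LASSO update for the fixed effects, holding the variance components fixed. The function name, default variances, and the choice of an L1 penalty are all assumptions made for illustration.

    # Minimal sketch of penalized h-likelihood estimation for a linear mixed
    # model y = X beta + Z v + eps (Gaussian v and eps, variances held fixed).
    # This is an illustrative alternating scheme, not the paper's algorithm.
    import numpy as np
    from sklearn.linear_model import Lasso

    def penalized_h_likelihood(X, Z, y, lam, sigma2=1.0, sigma2_v=1.0, n_iter=50):
        n, p = X.shape
        q = Z.shape[1]
        beta = np.zeros(p)
        lasso = Lasso(alpha=lam, fit_intercept=False)
        for _ in range(n_iter):
            # v-step: maximizing h in v given beta is a ridge-type solve,
            # (Z'Z/sigma2 + I/sigma2_v) v = Z'(y - X beta)/sigma2.
            A = Z.T @ Z / sigma2 + np.eye(q) / sigma2_v
            v = np.linalg.solve(A, Z.T @ (y - X @ beta) / sigma2)
            # beta-step: penalized least squares on the partial residual;
            # the L1 penalty sets small coefficients exactly to zero.
            lasso.fit(X, y - Z @ v)
            beta = lasso.coef_.copy()
        return beta, v

In practice one would also update the variance components (for example, by maximizing an adjusted profile h-likelihood) and tune lam, but those steps depend on details not shown in this summary.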

Literature Review
Variable Selection via Penalized h-Likelihood
Findings
Conclusion