Abstract
We propose a penalization algorithm for functional linear regression models in which the coefficient function β is shrunk towards a data-driven shape template γ. To the best of our knowledge, the nonzero-centered L2 penalty is employed here in a novel manner: the center of the penalty, γ, is itself optimized while being constrained, through a restriction of its basis expansion, to belong to a class of piecewise functions Γ. This indirect penalization lets the user control the overall shape of β by encoding prior knowledge on γ through the definition of Γ, without limiting the flexibility of the estimated model. In particular, we focus on the case where γ is expressed as a sum of q rectangles that are adaptively positioned with respect to the regression error. Because finding the optimal knot placement of a piecewise function is a nonconvex problem, we also propose a novel parametrization that reduces the number of variables in the global optimization scheme, resulting in a fitting algorithm that alternates between approximating a suitable template and solving a convex ridge-like problem. The predictive power and interpretability of our method are demonstrated on multiple simulations and two real-world case studies.
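To make the alternating scheme described above concrete, the following is a minimal sketch, not the authors' implementation. It assumes the functional covariates have been discretized on a regular grid (rows of X are curve evaluations), replaces the sum-of-q-rectangles template class Γ with a simpler piecewise-constant step function with `n_pieces` segments, and uses hypothetical parameters `lam` (penalty weight) and `n_iter` (number of alternations). The ridge-like step has the usual closed form for a nonzero-centered L2 penalty; the template step is an exact dynamic-programming segmentation of the current β.

```python
import numpy as np

def piecewise_constant_template(beta, n_pieces):
    """Best L2 approximation of beta by a step function with n_pieces
    contiguous constant segments (exact dynamic-programming segmentation).
    This is a simplified stand-in for the paper's sum-of-rectangles class."""
    p = len(beta)
    csum = np.concatenate(([0.0], np.cumsum(beta)))
    csum2 = np.concatenate(([0.0], np.cumsum(beta ** 2)))

    def seg_cost(i, j):  # squared error of fitting beta[i..j] by its mean
        n = j - i + 1
        s = csum[j + 1] - csum[i]
        return (csum2[j + 1] - csum2[i]) - s * s / n

    dp = np.full((n_pieces + 1, p), np.inf)   # dp[k, j]: best cost of k pieces on beta[0..j]
    arg = np.zeros((n_pieces + 1, p), dtype=int)
    for j in range(p):
        dp[1, j] = seg_cost(0, j)
    for k in range(2, n_pieces + 1):
        for j in range(k - 1, p):
            costs = [dp[k - 1, i - 1] + seg_cost(i, j) for i in range(k - 1, j + 1)]
            i_best = int(np.argmin(costs))
            dp[k, j] = costs[i_best]
            arg[k, j] = i_best + (k - 1)      # start index of the last segment
    # backtrack the segment boundaries and build the template
    gamma = np.empty(p)
    j = p - 1
    for k in range(n_pieces, 0, -1):
        i = arg[k, j] if k > 1 else 0
        gamma[i:j + 1] = beta[i:j + 1].mean()
        j = i - 1
    return gamma

def fit_template_shrinkage(X, y, lam=1.0, n_pieces=3, n_iter=20):
    """Alternate between (i) the convex ridge-like problem centered at gamma,
    argmin ||y - X beta||^2 + lam * ||beta - gamma||^2, and
    (ii) refitting gamma as a piecewise-constant approximation of beta."""
    n, p = X.shape
    gamma = np.zeros(p)
    A = X.T @ X + lam * np.eye(p)
    for _ in range(n_iter):
        beta = np.linalg.solve(A, X.T @ y + lam * gamma)  # nonzero-centered ridge solve
        gamma = piecewise_constant_template(beta, n_pieces)
    return beta, gamma
```

In this sketch the knot positions of the template are chosen exactly by dynamic programming at each iteration; the paper's contribution of a reduced parametrization for the nonconvex knot-placement problem is not reproduced here.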