Abstract

Induction benefits from useful priors. Penalized regression approaches, like ridge regression, shrink weights toward zero, but zero association is usually not a sensible prior. Inspired by the simple and robust decision heuristics humans use, we constructed non-zero priors for penalized regression models that provide robust and interpretable solutions across several tasks. Our approach enables estimates from a constrained model to serve as a prior for a more general model, yielding a principled way to interpolate between models of differing complexity. We successfully applied this approach to a number of decision and classification problems, as well as to the analysis of simulated brain imaging data. Models with robust priors had excellent worst-case performance. Solutions followed from the form of the heuristic that was used to derive the prior. These new algorithms can serve applications in data analysis and machine learning, as well as help in understanding how people transition from novice to expert performance.

Highlights

  • Inference from data is most successful when it involves a helpful inductive bias or prior belief

  • These models are robust across the range of θ values because they converge to a reasonable estimate

  • We considered simulated functional magnetic resonance imaging time series that allowed for comparing estimates to ground truth


Introduction

Inference from data is most successful when it involves a helpful inductive bias or prior belief. Regularized regression approaches, such as ridge regression, incorporate a penalty term that complements the fit term by constraining the solution, akin to how Occam’s razor favors solutions that both fit the observed data and are simple. Their weakness is insensitivity to aspects of the data due to their rigid inductive bias (Geman, Bienenstock, & Doursat, 1992; Parpart et al., 2018). This weakness is ameliorated when heuristics function as priors within more complex models, because priors can be overcome by additional data, much like how human experts develop more complex and nuanced knowledge with increasing experience in a domain. Because the heuristics themselves are interpretable models, the solution of the encompassing model can be understood in terms of deviations from the heuristic prior.
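The idea of shrinking toward a heuristic rather than toward zero can be sketched concretely. Below is a minimal illustration, not the authors' implementation: ridge regression is modified so its penalty pulls the weights toward a prior vector w0 derived from a tallying (unit-weight) heuristic. The function names and the choice of sign-of-correlation to set the unit weights are assumptions for illustration.

```python
import numpy as np

def ridge_with_prior(X, y, w0, lam=1.0):
    """Ridge regression shrunk toward a non-zero prior w0.

    Minimizes ||y - X w||^2 + lam * ||w - w0||^2, whose closed-form
    solution is w = (X'X + lam*I)^{-1} (X'y + lam * w0).
    Setting w0 = 0 recovers ordinary ridge regression.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

def tallying_prior(X, y):
    """Unit-weight ('tallying') prior: +1 or -1 per cue,
    taken here from the sign of each cue's correlation with y."""
    return np.sign(np.array([np.corrcoef(X[:, j], y)[0, 1]
                             for j in range(X.shape[1])]))

# Toy data: three cues with known weights plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, 0.8, -1.2]) + rng.normal(scale=0.5, size=40)

w0 = tallying_prior(X, y)        # heuristic estimate, entries in {-1, +1}
w = ridge_with_prior(X, y, w0)   # regression pulled toward the heuristic
```

As the penalty weight grows, the solution converges to the heuristic's unit weights; as it shrinks, the solution approaches ordinary least squares, which is one way to interpolate between a simple heuristic and a more general model.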

Robust priors based on decision-making heuristics
TAL and TTB heuristics
Application I
Methods
Results
Application II
Application III
Simulated fMRI data
General discussion
Declaration of competing interest