Abstract

A standard rule of thumb states that a model has too many parameters to be testable if and only if it has at least as many parameters as empirically observable quantities. We argue that when one asks whether a model has too many parameters to be testable, one implicitly refers to a particular type of testability, which we call quantitative testability. A model is defined to be quantitatively testable if the model's predictions have zero probability of being correct by chance. Next, we propose a new rule of thumb, based on the rank of the Jacobian matrix of a model (i.e., the matrix of partial derivatives of the function that maps the model's parameter values onto predicted experimental outcomes). According to this rule, a model is quantitatively testable if and only if the rank of the Jacobian matrix is less than the number of observables. (The rank of this matrix can be found with standard computer algorithms.) Using Sard's theorem, we prove that the proposed new rule of thumb is correct provided that certain "smoothness" conditions are satisfied. We also discuss the relation between quantitative testability and reparameterization, identifiability, and goodness-of-fit testing.
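As a minimal illustration of the proposed rule, the sketch below approximates the Jacobian of a model's prediction map by finite differences and compares its rank with the number of observables. The function names (`jacobian_rank`, `model`) and the two-parameter example model are hypothetical and are not taken from the paper; the rank computation uses a standard numerical routine (`numpy.linalg.matrix_rank`).

```python
import numpy as np

def jacobian_rank(prediction_fn, theta, eps=1e-6, tol=1e-8):
    """Estimate the rank of the Jacobian of a model's prediction map.

    prediction_fn : maps a parameter vector (length k) to a vector of
                    predicted observables (length n).
    theta         : parameter point at which the Jacobian is evaluated.
    The Jacobian is approximated by central finite differences; its rank
    is the number of singular values above the tolerance `tol`.
    """
    theta = np.asarray(theta, dtype=float)
    y0 = np.asarray(prediction_fn(theta), dtype=float)
    n, k = y0.size, theta.size
    J = np.zeros((n, k))
    for j in range(k):
        step = np.zeros(k)
        step[j] = eps
        J[:, j] = (np.asarray(prediction_fn(theta + step), dtype=float)
                   - np.asarray(prediction_fn(theta - step), dtype=float)) / (2 * eps)
    return np.linalg.matrix_rank(J, tol=tol), n

# Hypothetical model: two parameters (a, b) predicting three observables.
def model(theta):
    a, b = theta
    return np.array([a + b, a * b, a - b])

rank, n_observables = jacobian_rank(model, [1.0, 2.0])
# Proposed rule of thumb: the model is quantitatively testable
# if and only if the Jacobian rank is less than the number of observables.
print(rank, n_observables, rank < n_observables)  # here rank 2 < 3 observables
```

In this toy case the Jacobian has rank 2 while there are 3 observables, so by the rule of thumb the model would count as quantitatively testable at that parameter point.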
