Abstract

How many random points from an identified set, a confidence set, or a highest posterior density set suffice to describe it? This paper argues that taking random draws from a parameter region in order to approximate its shape is a supervised learning problem (analogous to sampling pixels of an image in order to recognize it). Misclassification error, a common criterion in machine learning, provides an off-the-shelf tool to assess the quality of a given approximation. We say a parameter region can be learned if there is an algorithm that yields a misclassification error of at most ϵ with probability at least 1−δ, regardless of the sampling distribution. We show that learning a parameter region is possible if and only if its potential shapes are not too complex. Moreover, the tightest band that contains a d-dimensional parameter region is always learnable from the inside (in a sense we make precise), with at least max{(1−ϵ)ln(1∕δ), (3∕16)d}∕ϵ draws, but at most min{2d ln(2d∕δ), exp(1)(2d+ln(1∕δ))}∕ϵ. These bounds grow linearly in the dimension of the parameter region, and are uniform with respect to its true shape. We illustrate the usefulness of our results using structural vector autoregressions. We show how many orthogonal matrices are necessary/sufficient to evaluate the impulse responses' identified set and how many 'shotgun plots' to report when conducting joint inference on impulse responses.
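To make the sample-size bounds concrete, the sketch below (ours, not the paper's) simply evaluates the lower and upper bounds on the number of draws for illustrative values of d, ϵ and δ; the grouping of terms follows the reading of the bounds given above, and the chosen values of d, ϵ and δ are assumptions for illustration only.

```python
import math

def draw_bounds(d, eps, delta):
    """Evaluate the abstract's lower and upper bounds on the number of draws
    needed to learn the tightest band containing a d-dimensional parameter
    region, with misclassification error at most eps and failure probability
    at most delta. The grouping of terms is our reading of the bounds."""
    lower = max((1 - eps) * math.log(1 / delta), (3 / 16) * d) / eps
    upper = min(2 * d * math.log(2 * d / delta),
                math.e * (2 * d + math.log(1 / delta))) / eps
    return lower, upper

# Illustrative (hypothetical) values: a 10-dimensional impulse-response band,
# misclassification error eps = 0.05, failure probability delta = 0.05.
lo, hi = draw_bounds(d=10, eps=0.05, delta=0.05)
print(f"at least {math.ceil(lo)} draws, at most {math.ceil(hi)} draws")
```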
