Abstract

Courts and authors have suggested that, under certain circumstances, claim aggregation, and statistical sampling procedures in particular, can increase not only efficiency but also accuracy. Such assertions have been used to rebut anti-aggregation arguments premised on the view that accuracy cannot be sacrificed for the sake of efficiency. Yet assertions that sampling procedures can increase accuracy have been met with skepticism and a general unwillingness to rely on them in real-world contexts. This skepticism arguably reflects the fact that legal scholarship has not yet examined, in any rigorous form, the practical effect of sampling procedures on accuracy under real-world conditions of both claim variability and judgment variability, and under the realistic constraints imposed by the law. In this article, I introduce a framework for examining the conditions under which sampling can increase accuracy in the law. In particular, I develop a model for studying the effects of sampling on accuracy, and for deriving the accuracy-optimal sample size, under conditions of claim and judgment variability and with the constraints described by reductive sampling. I then discuss a number of important extensions, including methods for estimating variability parameters and the use of sequential sampling and stratification.
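
The article's model is not reproduced here, but the trade-off it studies can be illustrated with a rough Monte Carlo sketch. The sketch below is an assumption-laden simplification, not the article's actual model or its definition of reductive sampling: true claim values are drawn with spread sigma_claim (claim variability), each adjudication adds noise with spread sigma_judge (judgment variability), sampled claims receive their noisy adjudicated values, and unsampled claims receive the sample mean. All names and parameter values (N, sigma_claim, sigma_judge, TRIALS) are illustrative choices.

```python
# A minimal sketch, assuming a simple extrapolation rule: n of N claims
# are individually adjudicated (with judgment noise) and the remaining
# N - n claims are awarded the sample mean. Accuracy is measured as the
# mean squared error between awards and true claim values.
import numpy as np

rng = np.random.default_rng(0)

N = 500            # total claims in the aggregate proceeding
sigma_claim = 1.0  # claim variability: spread of true claim values
sigma_judge = 3.0  # judgment variability: noise in each adjudication
TRIALS = 2000      # Monte Carlo repetitions per sample size

def mean_squared_error(n: int) -> float:
    """Average squared error of awards when n of the N claims are
    individually adjudicated and the rest receive the sample mean."""
    total = 0.0
    for _ in range(TRIALS):
        true_values = rng.normal(0.0, sigma_claim, size=N)
        noise = rng.normal(0.0, sigma_judge, size=n)
        adjudicated = true_values[:n] + noise      # sampled claims
        extrapolated = adjudicated.mean()          # unsampled claims
        awards = np.concatenate([adjudicated,
                                 np.full(N - n, extrapolated)])
        total += np.mean((awards - true_values) ** 2)
    return total / TRIALS

# Sweep candidate sample sizes and report the most accurate one.
sizes = [5, 10, 25, 50, 100, 250, 500]
errors = {n: mean_squared_error(n) for n in sizes}
for n in sizes:
    print(f"n = {n:3d}  MSE = {errors[n]:.3f}")
print(f"approximately optimal sample size: n = {min(errors, key=errors.get)}")
```

With judgment noise set larger than claim variability, as here, the error is typically minimized at an interior sample size, so adjudicating every claim individually (n = N) is not the most accurate option; reversing the relative magnitudes of the two variances pushes the optimum back toward n = N. This is only meant to make the abstract's accuracy claim concrete, not to reproduce its derivation of the optimal sample size.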
