Abstract

Sensitivity testing often involves sequential design strategies in small-sample settings that yield binary data, which are then used to fit generalized linear models. Model parameters are usually estimated by maximum likelihood, and confidence bounds for model parameters and quantiles are often based on the likelihood ratio. In this paper, it is demonstrated how the bias-corrected parametric bootstrap, used in conjunction with approximate pivotal quantities, provides an alternative means of constructing bounds under a location-scale model. In small-sample settings, the coverage of likelihood-ratio bounds is often anticonservative because of bias in estimating the scale parameter. In contrast, bounds produced by the bias-corrected parametric bootstrap can achieve accurate coverage in such settings when both the sequential strategy and the method of parameter estimation adapt effectively (are approximately equivariant) to location and scale. A series of simulations illustrates this contrasting behavior in a small-sample setting under a normal/probit model paired with a popular sequential design strategy. In addition, it is shown how a high-fidelity assessment of performance can be obtained with reduced computational effort by using the nonparametric bootstrap to resample pivotal quantities drawn from a small set of parametric bootstrap simulations.
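
To make the pivot-based construction concrete, the following is a minimal sketch, not the paper's implementation: it assumes a fixed (non-sequential) design, illustrative data, and a generic pivot inversion rather than the bias-corrected procedure the abstract describes. All function names, parameter values, and data here are hypothetical.

```python
# Illustrative sketch (assumptions, not the authors' method): parametric
# bootstrap of an approximate pivotal quantity to bound a quantile x_p of a
# normal/probit location-scale model fit to binary sensitivity-test data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

def neg_log_lik(theta, x, y):
    """Negative log-likelihood: P(response at x) = Phi((x - mu)/sigma)."""
    mu, log_sigma = theta
    p = np.clip(norm.cdf((x - mu) / np.exp(log_sigma)), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_mle(x, y):
    """Maximum-likelihood estimates (mu_hat, sigma_hat)."""
    res = minimize(neg_log_lik, x0=np.array([np.mean(x), 0.0]),
                   args=(x, y), method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(log_sigma)

# Hypothetical stimulus levels and binary responses (true mu = 10, sigma = 1).
x_obs = np.array([8.0, 9.0, 9.5, 9.8, 10.0, 10.2, 10.4, 10.5, 10.8, 11.0])
y_obs = (rng.random(x_obs.size) < norm.cdf(x_obs - 10.0)).astype(int)

mu_hat, sigma_hat = fit_mle(x_obs, y_obs)
p = 0.9
zp = norm.ppf(p)
xp_hat = mu_hat + zp * sigma_hat        # plug-in estimate of the p-quantile

# Parametric bootstrap of the approximate pivot T* = (x_p_hat* - x_p_hat) / sigma_hat*,
# which is (approximately) free of (mu, sigma) when estimation is equivariant.
B = 1000
pivots = np.empty(B)
for b in range(B):
    y_star = (rng.random(x_obs.size)
              < norm.cdf((x_obs - mu_hat) / sigma_hat)).astype(int)
    mu_b, sigma_b = fit_mle(x_obs, y_star)
    pivots[b] = (mu_b + zp * sigma_b - xp_hat) / sigma_b  # unstable if sigma_b ~ 0

# Invert bootstrap quantiles of the pivot for a two-sided 90% interval on x_p.
q_lo, q_hi = np.quantile(pivots, [0.05, 0.95])
lower, upper = xp_hat - q_hi * sigma_hat, xp_hat - q_lo * sigma_hat
print(f"x_p estimate {xp_hat:.2f}; 90% bounds ({lower:.2f}, {upper:.2f})")
```

The interval arises from inverting the bootstrap distribution of T = (x_p_hat − x_p)/sigma_hat; under approximate equivariance that distribution does not depend on the unknown location and scale, which is what makes the construction plausible in small samples. In practice degenerate refits (e.g., complete separation) would need screening, and a sequential design would be re-run inside each bootstrap replication.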
