Abstract

The model-X conditional randomisation test (CRT) is a flexible and powerful procedure for testing the hypothesis X⫫Y∣Z. However, it requires perfect knowledge of X∣Z and may lose its validity when there is an error in modelling X∣Z. This problem is even more severe when Z is of high dimensionality. In response to this, we propose the Maxway CRT, which learns the distribution of Y∣Z and uses it to calibrate the resampling distribution of X to gain robustness to the error in modelling X∣Z. We prove that the type-I error inflation of the Maxway CRT can be controlled by the learning error for a low-dimensional adjusting model plus the product of the learning errors for X∣Z and Y∣Z, interpreted as an ‘almost doubly robust’ property. Based on this, we develop algorithms for implementing the Maxway CRT in practical scenarios including (surrogate-assisted) semi-supervised learning (SA-SSL) and transfer learning (TL). Through simulations, we demonstrate that the Maxway CRT achieves significantly better type-I error control than existing model-X inference approaches while preserving similar power. Finally, we apply our methodology to two real examples of SA-SSL and TL.
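To make the testing procedure concrete, the following is a minimal sketch of a plain model-X CRT in Python. Everything here is illustrative rather than the authors' implementation: the function names (`crt_pvalue`, `sample_x_given_z`, `statistic`), the toy Gaussian working model for X∣Z, and the choice of test statistic are all assumptions. The Maxway calibration step, in which a learned low-dimensional summary of Y∣Z additionally informs the resampling of X, is only indicated in a comment.

```python
# Hypothetical sketch of a model-X conditional randomisation test (CRT).
# `sample_x_given_z` is an assumed user-supplied resampler approximating X | Z.
# In the Maxway variant described in the abstract, a learned low-dimensional
# summary of Y | Z (e.g. a fitted E[Y | Z]) would additionally be used to
# calibrate this resampling step; that calibration is not implemented here.
import numpy as np


def crt_pvalue(x, y, z, sample_x_given_z, statistic, n_resamples=500, rng=None):
    """Return a CRT p-value for testing X independent of Y given Z.

    sample_x_given_z(z, rng) -> X drawn from the (estimated) law of X | Z
    statistic(x, y, z)       -> scalar; larger values indicate stronger dependence
    """
    rng = np.random.default_rng(rng)
    t_obs = statistic(x, y, z)
    t_null = np.array(
        [statistic(sample_x_given_z(z, rng), y, z) for _ in range(n_resamples)]
    )
    # Finite-sample valid p-value when X | Z is modelled exactly.
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_resamples)


# Toy usage under the null (Y depends on Z only), with an assumed Gaussian
# linear working model for X | Z fitted by least squares.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 200, 5
    z = rng.normal(size=(n, p))
    x = z @ rng.normal(size=p) + rng.normal(size=n)
    y = z @ rng.normal(size=p) + rng.normal(size=n)

    coef, *_ = np.linalg.lstsq(z, x, rcond=None)
    resid_sd = np.std(x - z @ coef)
    sample_x = lambda z_, r: z_ @ coef + r.normal(scale=resid_sd, size=len(z_))
    stat = lambda x_, y_, z_: abs(np.corrcoef(x_ - z_ @ coef, y_)[0, 1])

    print(crt_pvalue(x, y, z, sample_x, stat, n_resamples=200, rng=1))
```

The p-value uses the standard "add one" correction so that it is exactly valid under the null whenever the resampler matches the true X∣Z; the robustness issue the abstract addresses arises precisely when this working model is misspecified.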
