Propensity Score Adjustment (PSA) is a widely accepted method for reducing selection bias in nonprobability samples. In this approach, the (unknown) response probability of each individual in a nonprobability sample is estimated with the help of a reference probability sample. Thus, the researcher obtains a representation of the target population that reflects the differences (over a set of auxiliary variables) between the population and the nonprobability sample, from which response probabilities can be estimated.

Auxiliary probability samples are usually produced by surveys with complex sampling designs, meaning that the use of design weights is crucial for calculating response probabilities accurately. When a linear model is used for this task, maximising a pseudo log-likelihood function that involves the design weights provides consistent estimates for the inverse probability weighting estimator. However, little is known about how design weights may benefit the estimates when techniques such as machine learning classifiers are used instead.

This study aims to investigate the behaviour of Propensity Score Adjustment with machine learning classifiers, subject to the use of weights in the modelling step. A theoretical approximation to the problem is presented, together with a simulation study highlighting the properties of estimators that use different types of weights in the propensity modelling step.