Abstract
Researchers have worked to develop machine learning models that detect at-risk online gamblers, enabling personalized harm-prevention tools. However, existing research has not evaluated these models’ potential to reinforce or amplify sociodemographic biases leading to treatment disparity, a recognized issue in the machine learning field. We sought to develop and compare three examples of potentially fair models using online gambling data. In two large samples of transaction data from a provincially owned Canadian gambling website (N1 = 9,145; N2 = 10,716), we developed three machine learning models based on competing concepts of fairness: fairness via unawareness, classification parity, and outcome calibration. We hypothesized that significant relationships existed between reporting a high risk of past-year gambling problems (the dependent variable) and participants’ age and sex. Further, we hypothesized that the three ‘fair’ models would show differing levels of classification performance, both in aggregate and within sociodemographic groups. Significant age and sex effects were found, refuting the fairness-via-unawareness modeling strategy. Neither of the remaining models was superior across all performance metrics. For the fairest practices in any jurisdiction, classification parity and outcome calibration models should be tested in situ and should incorporate the perspectives and preferences of the end users they will affect.
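The abstract contrasts two group-fairness criteria. As a rough, hypothetical sketch (not the authors’ models, data, or code), the Python below shows how each criterion is commonly operationalized for a binary high-risk label: classification parity compares an error rate (here, the true-positive rate) across sociodemographic groups, while outcome calibration compares mean predicted risk with observed risk within each group. All variable names and the synthetic data are illustrative assumptions.

```python
# Hypothetical illustration only: common checks for the two group-fairness
# criteria named in the abstract, run on synthetic data (not the study's data).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a binary group (e.g., sex), a true high-risk label,
# and model risk scores in [0, 1].
n = 5000
group = rng.integers(0, 2, size=n)
y_true = rng.binomial(1, 0.15 + 0.10 * group)
score = np.clip(0.30 + 0.40 * y_true + rng.normal(0, 0.15, n), 0, 1)
y_pred = (score >= 0.5).astype(int)

def tpr_by_group(y_true, y_pred, group):
    """Classification parity check: true-positive rate within each group."""
    return {int(g): y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)}

def calibration_by_group(y_true, score, group, bins=5):
    """Calibration check: (mean score, observed positive rate) per score bin,
    within each group; a calibrated model has these roughly equal."""
    bin_idx = np.minimum((score * bins).astype(int), bins - 1)
    out = {}
    for g in np.unique(group):
        rows = []
        for b in range(bins):
            m = (group == g) & (bin_idx == b)
            if m.any():
                rows.append((round(score[m].mean(), 2),
                             round(y_true[m].mean(), 2)))
        out[int(g)] = rows
    return out

print("TPR by group (parity wants these close):",
      tpr_by_group(y_true, y_pred, group))
print("Calibration by group (each pair should match):",
      calibration_by_group(y_true, score, group))
```

In general, when base rates differ between groups, these two criteria cannot both be satisfied exactly, which is consistent with the abstract’s finding that neither model dominated on every performance metric.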