Abstract

Machine Learning (ML) systems are now widely used in fields such as hiring, healthcare, and criminal justice, yet they are prone to unfairness and discrimination, which can have serious consequences for individuals and society. Although various fairness testing methods have been developed to tackle this issue, they lack a mechanism for continuously monitoring ML system behaviour at runtime. In this study, a runtime verification tool called BiasTrap is proposed to detect and prevent discrimination in ML systems. The tool combines data augmentation and bias detection components to create and analyse instances that differ only in their sensitive attributes, enabling the detection of discriminatory behaviour in the ML model. Simulation results demonstrate that BiasTrap effectively detects discriminatory behaviour in ML models trained on different datasets with various algorithms, making it a valuable tool for ensuring fairness in ML systems at runtime.
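The abstract describes a flip-and-compare approach: augment an incoming instance with copies that differ only in the sensitive attributes, then flag the prediction as discriminatory if the model's output changes across those copies. A minimal sketch of that idea is given below; the class name BiasMonitor, the sensitive_values mapping, and the scikit-learn-style predict() interface are illustrative assumptions, not the paper's actual API.

    # Minimal sketch of flip-based runtime bias detection (assumed design,
    # not the BiasTrap implementation). Works with any model exposing a
    # scikit-learn-style predict() method.
    from itertools import product


    class BiasMonitor:
        """Wraps a trained model and flags predictions that change when
        only the sensitive attributes of an instance are altered."""

        def __init__(self, model, sensitive_values):
            # sensitive_values maps a feature index to the values it may
            # take, e.g. {2: [0, 1]} for a binary 'sex' feature (hypothetical).
            self.model = model
            self.sensitive_values = sensitive_values

        def _augment(self, instance):
            # Data augmentation step: generate counterfactual copies of the
            # instance that differ only in the sensitive attributes.
            keys = list(self.sensitive_values)
            for combo in product(*(self.sensitive_values[k] for k in keys)):
                variant = list(instance)
                for k, value in zip(keys, combo):
                    variant[k] = value
                yield variant

        def check(self, instance):
            # Bias detection step: the model behaves discriminatorily on this
            # instance if any counterfactual copy receives a different label.
            variants = list(self._augment(instance))
            predictions = self.model.predict(variants)
            return len(set(predictions)) > 1  # True => discrimination detected

Under these assumptions, each incoming request could be screened before its prediction is served, e.g. monitor = BiasMonitor(model, {2: [0, 1]}) followed by monitor.check(x), which is the kind of continuous runtime monitoring the abstract argues offline fairness testing lacks.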
