Abstract

This systematic literature review investigates the fairness of machine learning algorithms in educational settings, focusing on recent studies and their proposed solutions for addressing bias. The applications analyzed include student dropout prediction, performance prediction, forum post classification, and recommender systems. We identify common strategies, such as adjusting sample weights, bias attenuation methods, fairness through un/awareness, and adversarial learning. Commonly used metrics for fairness assessment include ABROCA, group differences in performance, and disparity metrics. The review underscores the need for context-specific approaches to ensure equitable treatment and reveals that most studies found no strict tradeoff between fairness and accuracy. We recommend evaluating the fairness of data and features before algorithmic fairness, so that algorithms do not receive discriminatory inputs; expanding the scope of fairness studies in education beyond gender and race to other demographic attributes; and assessing the impact of fair algorithms on end users, since human perceptions may not align with algorithmic fairness measures.
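For readers unfamiliar with ABROCA (Absolute Between-ROC Area), one of the fairness metrics named above, the following Python sketch illustrates how it is typically computed: the ROC curves of two demographic groups are placed on a common false-positive-rate grid and the area between them is integrated. This is an illustrative sketch only, not code from the reviewed studies; the function and variable names (abroca, y_true, y_score, group) are our own.

```python
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, group):
    """Absolute Between-ROC Area: area between the ROC curves of two
    demographic groups, integrated over the false-positive-rate axis.
    Values closer to 0 indicate more similar model behaviour across groups."""
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    labels = np.unique(group)
    assert len(labels) == 2, "ABROCA is defined for exactly two groups"

    # ROC curve for each group separately
    fpr0, tpr0, _ = roc_curve(y_true[group == labels[0]], y_score[group == labels[0]])
    fpr1, tpr1, _ = roc_curve(y_true[group == labels[1]], y_score[group == labels[1]])

    # Interpolate both curves onto a shared FPR grid
    grid = np.linspace(0.0, 1.0, 1001)
    tpr0_i = np.interp(grid, fpr0, tpr0)
    tpr1_i = np.interp(grid, fpr1, tpr1)

    # Absolute area between the curves (trapezoidal rule)
    return np.trapz(np.abs(tpr0_i - tpr1_i), grid)
```

In contrast to simple group differences in accuracy, this metric captures disparities across all classification thresholds, which is why it appears frequently in the educational fairness studies surveyed.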
