Abstract

This research investigates bias in AI algorithms used for monitoring student progress, focusing specifically on bias related to age, disability, and gender. The study is motivated by incidents such as the UK A-level grading controversy, which demonstrated the real-world consequences of biased algorithms. Using the Open University Learning Analytics Dataset, the research evaluates fairness using metrics such as ABROCA, Average Odds Difference, and Equality of Opportunity Difference. The analysis is structured into three experiments. The first experiment examines fairness as an attribute of the data sources and reveals that institutional data is the primary contributor to model discrimination, followed by Virtual Learning Environment data, while assessment data is the least biased. In the second experiment, the research introduces the Optimal Time Index, which pinpoints Day 60 of an average 255-day course as the optimal time for predicting student outcomes, balancing timely intervention, model accuracy, and efficient resource allocation. The third experiment implements bias mitigation strategies throughout the model's life cycle, achieving fairness without compromising accuracy. Finally, the study introduces the Student Progress Card, designed to provide actionable, personalized feedback to each student.
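Two of the group-fairness metrics named above have simple closed forms: Equality of Opportunity Difference is the gap in true positive rates between an unprivileged and a privileged group, and Average Odds Difference is the mean of the gaps in true positive and false positive rates. The sketch below is a minimal illustration of those standard definitions, not the paper's implementation; the function names and the encoding of the protected attribute (0 = unprivileged, 1 = privileged) are assumptions for this example.

```python
import numpy as np

def _rate(y_true, y_pred, label, mask):
    """Positive-prediction rate among samples with y_true == label in the masked group.

    label=1 gives the true positive rate (TPR); label=0 gives the false positive rate (FPR).
    """
    sel = (y_true == label) & mask
    return float(np.mean(y_pred[sel])) if sel.any() else 0.0

def equality_of_opportunity_difference(y_true, y_pred, group):
    """TPR(unprivileged) - TPR(privileged); 0.0 means equal opportunity."""
    return _rate(y_true, y_pred, 1, group == 0) - _rate(y_true, y_pred, 1, group == 1)

def average_odds_difference(y_true, y_pred, group):
    """Mean of the TPR gap and the FPR gap between the two groups."""
    tpr_gap = _rate(y_true, y_pred, 1, group == 0) - _rate(y_true, y_pred, 1, group == 1)
    fpr_gap = _rate(y_true, y_pred, 0, group == 0) - _rate(y_true, y_pred, 0, group == 1)
    return 0.5 * (tpr_gap + fpr_gap)

# Toy example: 4 unprivileged (group 0) and 4 privileged (group 1) students.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(equality_of_opportunity_difference(y_true, y_pred, group))  # → -0.5
print(average_odds_difference(y_true, y_pred, group))             # → 0.0
```

Here the unprivileged group has a TPR of 0.5 against 1.0 for the privileged group, giving an Equality of Opportunity Difference of -0.5, while its higher FPR (0.5 vs 0.0) exactly offsets that gap in the Average Odds Difference.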


