Abstract

In August 2020, students took to the streets of London in a mass protest over their A-level results (which determine entry to university), chanting “F*ck the algorithm.” Their A-level grades had been determined by an algorithm that drew on past school performance and teacher rankings, with Centre Assessed Grades used for smaller student groups. Commendably, Ofqual (the U.K. Office of Qualifications and Examinations Regulation) was transparent in its use of the algorithm, so affected stakeholders were informed of the algorithm’s existence and the key details of how it was developed. The design of the data set produced an algorithm with biased outputs, which hardcoded preexisting educational, policy, and societal biases. Because of the large number of students affected, the single “results day” on which the outputs were released, and the wider interests of schools, teachers, parents, and universities, the unfairness of the outcome was magnified. Moreover, the students were, by definition, academically proficient individuals, many of whom came from privileged and supportive backgrounds, which meant that they had the awareness and capacity to challenge. Hence, the impact of this biased algorithm was not received silently, and so it was hard to deny, deflect, and dismiss. This is not always the case, however: many systems are deployed with similar biases and harmful outcomes, but their impact is dissipated across time and individuals, and is therefore less immediately obvious and less easy to challenge. This is particularly so when the people impacted lack the advantages that enabled the A-level students to challenge.
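The mechanism described above can be made concrete with a small sketch. The Python below is an illustrative simplification, not Ofqual’s actual model: the function name, grade scale, and small-cohort threshold are assumptions made purely for the example. It shows how mapping a teacher-provided rank order onto a centre’s historical grade distribution (with Centre Assessed Grades used only for small cohorts) can cap a strong student at the level of their school’s past results, which is one way school-level bias becomes hardcoded.

```python
# Illustrative sketch only: a simplified rank-to-distribution mapping, not
# Ofqual's published model. Names, grade scale, and the "small cohort"
# threshold are assumptions for this example.

def standardise_centre(ranked_students, historical_grades, cags, small_cohort=5):
    """Assign grades to students in one centre (school) for one subject.

    ranked_students: student ids, best first, as ranked by teachers.
    historical_grades: grades awarded at this centre in prior years,
                       best first (e.g. ["A*", "A", "B", ...]).
    cags: dict of student id -> Centre Assessed Grade.
    """
    n = len(ranked_students)
    if n <= small_cohort:
        # Small cohorts: fall back to the teacher-assessed grades.
        return {s: cags[s] for s in ranked_students}

    # Larger cohorts: map each student's rank position onto the centre's
    # historical grade distribution, so the awarded grades mirror the
    # school's past results regardless of this year's cohort.
    grades = {}
    m = len(historical_grades)
    for i, student in enumerate(ranked_students):
        j = min(int(i / n * m), m - 1)  # position in the historical distribution
        grades[student] = historical_grades[j]
    return grades


# A strong student at a centre that historically awarded few top grades is
# capped by that history, even if teachers predicted an A for everyone.
history = ["B", "B", "C", "C", "C", "D", "D", "E"]   # no A/A* in past years
students = [f"s{i}" for i in range(8)]
cags = {s: "A" for s in students}                    # teacher-assessed grades
print(standardise_centre(students, history, cags))   # best student receives a B
```

In this simplified form, the cohort’s own attainment never enters the calculation for larger centres; only its ordering does, so the school’s history, not the individual, sets the ceiling.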
