Abstract

This chapter is concerned with the broad proliferation of artificial intelligence (AI) technologies in learning assessment, and further traces the implications of AI-enabled assessment technologies and practices for student equity and inclusion. After defining AI in educational contexts and questioning its often-triumphalist narrative, the chapter examines several examples of AI-enabled assessment and explores the ways in which each may produce inequitable or exclusionary outcomes for students. It then works to problematise recent attempts to utilise AI and machine learning (ML) techniques themselves to minimise or detect inequitable or unfair outcomes through the largely technological and statistical focus of the growing fairness, accountability, and transparency movement in the data sciences. The chapter's central argument is that technological solutions to equity and inclusion are of limited value, particularly when educational institutions fail to engage in genuine political negotiation with a range of stakeholders and domain experts. Universities, it is argued, should not cede their ethical and legal responsibility for ensuring inclusive AI-enabled assessment practices to third-party vendors, ill-equipped teaching staff, or to technological “solutions” such as algorithmic tests for “fairness”.
