Abstract

Assessment developers are increasingly using the emerging technology of machine learning to transform how students' science learning is assessed. I argue that these algorithmic models further embed the structures of inequality that pervade the development of science assessments by legitimizing certain language practices that protect the hierarchical standing of status quo interests. My argument is situated within the broader emerging ethical challenges surrounding this new technology. I apply a raciolinguistic equity analysis framework to critique the “new black box” that reinforces structural forms of discrimination against the linguistic repertoires of racially marginalized student populations. The article ends with a set of tactical shifts that can be deployed to build a more equitable and socially just field of machine learning enhanced science assessments.
