Abstract

Purpose
This study aims to explore the perception of algorithm accuracy among data professionals in higher education.

Design/methodology/approach
Social justice theory guided the qualitative descriptive study and emphasized four principles: access, participation, equity and human rights. Data collection included eight online open-ended questionnaires and six semi-structured interviews. Participants were higher education professionals who have worked with predictive algorithm (PA) recommendations programmed with student data.

Findings
Participants are aware of systemic and racial bias in their PA inputs and outputs and acknowledge their responsibility to use PA recommendations ethically with students in historically underrepresented groups (HUGs). For some participants, examining these topics through the lens of social justice was a new experience, which led them to look at PAs in new ways.

Research limitations/implications
The small sample size is a limitation of the study. Implications for practice include increased stakeholder training, creating an ethical data strategy that protects students, incorporating adverse childhood experiences data with algorithm recommendations, and applying a modified critical race theory framework to algorithm outputs.

Originality/value
The study explored the perception of algorithm accuracy among data professionals in higher education. Examining this topic through a social justice lens contributes to limited research in the field. It also presents implications for addressing racial bias when using PAs with students in HUGs.

