Ethical considerations have become a central topic in education as artificial intelligence (AI) brings both significant innovation and challenges to educational practices and systems. Values influence what we believe is morally right and guide how we behave ethically in different situations. However, there is limited empirical research on improving the alignment between the values embedded in technology and the values prioritised by learners. Using the approach of value‐sensitive design (VSD), this study conducted an empirical investigation to explore: (1) how the ethical values of learners regarding facilitators were characterised in the online learning environment, (2) how specific features of ChatGPT and human experts as online learning facilitators embody these values, and (3) what value tensions occur in the online learning environment. To address these research questions, we designed a comparative experiment on online writing and revision facilitated by ChatGPT‐4 and a human expert. After the experiment, we conducted semi‐structured interviews with 59 learners about their learning experiences and feelings. The results showed that learners prioritised the values of responsiveness, social comfort, autonomy, freedom from bias and privacy during online learning. Compared with the human expert, ChatGPT as a facilitator exhibited tirelessness, friendliness and support for independent decision‐making, thereby embodying the values of social comfort and autonomy. However, ChatGPT struggled to interpret learners' intentions and emotions and posed risks of information leakage, falling short of embodying the values of responsiveness and privacy. Value tensions arose both within learner groups and between learners and other stakeholders, including developers and researchers. These tensions emerged from conflicting ethical values and pragmatic considerations in the online learning environment. Our findings highlight the importance of enhancing value alignment in online learning environments. Strategies for achieving this include developing value‐sensitive AI, leveraging the strengths of AI tools in embodying specific values, and extending the VSD methodology across AI's entire life cycle.

Practitioner notes

What is already known about this topic
Using ChatGPT as an online learning facilitator has been demonstrated to have various advantages, but its use also brings ethical challenges, particularly in aligning its features with the values of learners.
Value‐sensitive design (VSD) helps improve value alignment by embedding the values of stakeholders into technology design.
However, the values of learners regarding facilitators in online learning environments remain under‐investigated.

What this paper adds
We conducted a comparative experiment to investigate the value characteristics of learners, compare the embodied features of AI and human experts, and identify potential value tensions.
We found that learners prioritised the values of responsiveness, social comfort, autonomy, freedom from bias and privacy in the online learning environment.
We found that ChatGPT showed advantages in embodying specific values compared with the human expert, but value tensions and misalignment still emerged during online learning.
We found that value tensions arose not only within learner groups but also between learners and other stakeholders, such as developers and researchers.
Implications for practice and/or policy
Educational technology developers should embed stakeholders' values in AI tools to enhance value alignment and seek a balance between their own values and those of learners.
Educators should actively utilise AI as a powerful tool and maximise its advantages in embodying specific values.
Researchers should consider extending VSD methods to the entire life cycle of AI tools to accommodate value dynamism.