The advancements in Generative Artificial Intelligence (GenAI) provide opportunities for enriching educational experiences but at the same time raise concerns regarding academic integrity. Many educators have expressed anxiety and hesitation about integrating GenAI into their teaching practices. Thus, recommendations and guidance from institutions are needed to support instructors in this new and emerging GenAI era. In response to this need, this study explores different U.S. universities' academic policies and guidelines regarding the use of GenAI tools (e.g., ChatGPT) for teaching and learning, and thereby gains an understanding of how these universities respond and adapt to the development of GenAI in their academic contexts. Data sources include academic policies, statements, guidelines, and relevant resources provided by the top 100 universities in the U.S. Results show that the majority of these universities adopt an open but cautious approach toward GenAI. Primary concerns lie in ethical usage, accuracy, and data privacy. Most universities respond actively and provide diverse types of resources, such as syllabus templates, workshops, shared articles, and one-on-one consultations, covering a range of topics including general technical introductions, ethical concerns, pedagogical applications, preventive strategies, data privacy, limitations, and detection tools. The findings provide four practical pedagogical implications for educators considering GenAI in their teaching practices: 1) accepting the presence of GenAI, 2) aligning GenAI use with learning objectives, 3) evolving the curriculum to prevent misuse of GenAI, and 4) adopting multifaceted evaluation strategies. Regarding policy making, the article suggests two possible directions for the use of GenAI tools: 1) establishing discipline-specific policies and guidelines, and 2) managing students' sensitive information in a transparent and careful manner.