The growing use of generative artificial intelligence (GenAI) in education has heightened the importance of prompt literacy and prompt design, which shape the experience of GenAI-assisted learning. However, little is known about the prompt patterns that emerge during student-GenAI interaction (SAI) as students with different levels of AI literacy navigate learning tasks, or about how these patterns are associated with task performance. This study analyzed the prompt patterns of 19 university students interacting with GenAI on academic writing tasks, grouping students by AI literacy level and examining how their prompt patterns relate to performance. Content analysis was performed on student-GenAI chat histories, and categorical data analysis was applied to student interviews; pattern differences between groups were then visualized using Gephi 0.10.1. Distinctive prompt patterns emerged: students with high AI literacy used descriptive, context-based prompts in collaborative interactions, whereas students with low AI literacy used general prompts in a student-directed approach. To examine how prompt patterns relate to task performance, student essays were evaluated by five experts and compared using the Wilcoxon signed-rank test. The two groups differed significantly across all categories of writing performance (content, structure, and expression). These findings carry implications for designing and implementing GenAI and student-centered, AI-assisted instruction.