This study examines the factors driving student non-compliance with AI use declarations in academic assessments at King’s Business School, where 74% of students failed to declare AI usage even though a mandatory coursework coversheet required such declarations. Using the Theory of Planned Behaviour (TPB) as a framework, the research combines service evaluation survey data with semi-structured interviews to explore how attitudes, subjective norms, and perceived behavioural control influence compliance. Findings reveal that fear of academic repercussions, ambiguous guidelines, inconsistent enforcement, and peer influence are key barriers to AI use declaration. These factors complicate the declaration process, undermine transparency, and challenge academic integrity. The study extends the TPB model by highlighting the ethical and practical dilemmas posed by generative AI, which blur traditional norms of academic integrity. This research offers critical insights for policymakers, suggesting that clear, consistent, and trust-based policies are crucial to fostering ethical AI use. The findings underscore the importance of transparent communication and supportive institutional cultures in improving compliance. Ultimately, this study informs policy development by evaluating the effectiveness of declaration mechanisms and providing actionable recommendations to promote a culture of academic integrity in the evolving landscape of AI technologies.