Abstract
The rapid advancement of AI hardware and tools has led to the widespread adoption of natural language transformers such as OpenAI’s ChatGPT, Google Bard, Bing AI, and others across many business sectors. For the academic community, however, these AI tools present both opportunities and threats. Like their counterparts in business and industry, academics can leverage these tools for coding, idea and concept generation, planning, and other applications. At the same time, the academic community is concerned about the potential impact on academic integrity, as students may be tempted to rely on these tools to complete essays, assignments, and exams without putting in their own effort. In this article, we present our approach and findings in addressing these AI tools while evaluating student performance in two university student groups: engineering (Canadian) and communication technology (Taiwanese). We identify key guidelines that deter students from directly copying answers produced by AI tools such as ChatGPT. We also recognize that this will be an ongoing process, as AI tools continuously learn and adapt to new cases.