Abstract

Background: Rapid changes brought on by generative artificial intelligence (AI) have emphasized the need to teach students to work with this technology while also developing the "robot-proof" human skills future workers will need, such as creativity, communication, and critical thinking.

Objective: The study objective was to explore whether a fact-checking, generative-AI assignment, inserted between the outline and first-draft stages of a student's literature review writing process, would relate to student classification, perceptions of AI accuracy, and future trust in AI-generated content.

Method: Students in upper- and lower-division psychology classes used AI to generate a literature review on their final paper topic, which they then fact-checked for accuracy and usefulness using a color-coded system.

Results: Lower-division students expected more inaccuracy, highlighted less information as inaccurate, and reported greater future trust in AI-generated content than upper-division students.

Conclusion: Students with more experience critically evaluating primary sources may be better equipped to detect inaccuracies within AI-generated content.

Teaching Implications: Teachers of any course requiring a literature review paper may use this assignment to encourage student use of AI with a critical eye toward recognizing where that content is incorrect.

