Background: Rapid changes brought on by generative artificial intelligence (AI) have emphasized the need to teach students to work with this technology while also developing the "robot-proof" human skills future workers will need, such as creativity, communication, and critical thinking.

Objective: The study objective was to explore whether a fact-checking, generative-AI assignment, inserted between the outline and first-draft stages of a student's literature review writing process, would relate to student classification, perceptions of AI accuracy, and future trust in AI-generated content.

Method: Students in upper- and lower-division psychology classes used AI to generate a literature review on their final paper topic, which they then fact-checked for accuracy and usefulness using a color-coded system.

Results: Lower-division students expected more inaccuracy, highlighted less information as inaccurate, and reported greater future trust in AI-generated content than upper-division students.

Conclusion: Students with more experience critically evaluating primary sources may be better equipped to detect inaccuracies in AI-generated content.

Teaching Implications: Teachers of any course requiring a literature review paper may use this assignment to encourage students to use AI with a critical eye toward recognizing where that content is incorrect.