Abstract
Recently, ChatGPT was upgraded to a newer version for unsubscribed users: ChatGPT 3.5. Although ChatGPT has become an astonishing phenomenon worldwide for generating realistic text within seconds, it can also disseminate incorrect information and misconceptions, a problem technical experts have identified as hallucination. This paper examines ChatGPT's ability to differentiate between correct and incorrect relations in the questions put to it. It also explores the efficacy of ChatGPT in helping students acquire linguistic and literary proficiency. The study took the form of exploratory interpretive research. The participants were students studying English at the undergraduate level. Data were collected through semi-structured interviews, focus group discussions (FGDs), and input provided to ChatGPT, and all data were analyzed qualitatively. The findings indicate that ChatGPT tends to provide inconsistent information when a series of contextual questions is asked. Because of this hallucination, ChatGPT becomes an unreliable source for language and literature learning.