Abstract

ChatGPT has shown promise in assisting qualitative researchers with coding. Previous efforts have focused primarily on datasets derived from interviews and observations, leaving document analysis, another crucial data source, relatively unexplored. In this project, we address the rapidly emerging topic of disinformation regulatory policy as a pilot to investigate ChatGPT's potential for document analysis. We adapted our existing qualitative research framework, which identifies five key components of disinformation policy (context, actors, issue, instrument, and channel), to sketch out policy documents. We then designed a two-stage experiment employing a multi-layer workflow on a dataset of highly relevant policy documents from US federal government departments. By iteratively developing and refining six prompt strategies, we identified an effective few-shot learning strategy that achieved 72.0% accuracy and a 70.8% F-score with the optimal prompt. Our experimental process and outcomes explore the feasibility of using ChatGPT to support manual coding of policy documents and suggest a coding approach for conducting explicit document analysis through an interactive process between researchers and ChatGPT. Furthermore, our results open a wider debate on how to integrate human logic with ChatGPT logic, and on the evolving relationship between researchers and AI tools.

